Georgina Born
Recomposing ‘externalities’: Ethical and political challenges to creative AI music
For some time, certain valuable developments in electronic and computer music have questioned instrumental perspectives driven by an ‘innovation’ telos and positivistic methodologies of human-computer interaction in which technologies are figured through ‘command and control’ or stimulus/response paradigms as ‘tools’ for ‘application’ by a reigning human author-subject. Informed by critiques of these paradigms, the kinds of systems created in response have taken forms such as instantiating a ‘nonhierarchical… subject-subject model of discourse’, entailing ‘communication between two subject intelligences’, where the ‘tool’ is rendered capable of ‘the expression of personality, the assertion of agency, the assumption of responsibility and an encounter with history, memory and identity’ (Lewis). Alternatively, or perhaps complementing this, the very interactive system has been extended conceptually via the notion of an ‘audible ecosystem’ that is ‘in continual exchange with the surroundings and with its own history’ (Di Scipio). Here, feedback between machine, human, musical activity and external environment spreads out from performance, work, ‘text, or software, to become enmeshed in further flung social and material networks’ (Green). In this paper I build on these responses but urge the creative AI music community to go further – to extend how the system is conceived and empractised. In a famous 1998 paper, the great STS scholar Michel Callon probed what he called ‘economic externalities’ to point to the way that the science of economics frames its object, the economy. 
Callon’s prescient aim was to highlight how, routinely, the economy is framed so as systematically to absent from calculation its actual impact on, for example, the environment from which raw materials are drawn and into which toxic waste is dumped, or the labouring subaltern populations extracting those raw materials or assembling the silicon chips at the very basis of lively economic markets. It is the countervailing, Callon-esque attention to bringing those ‘externalities’ back into how we understand and calculate economic – but also social and cultural – processes, impacts and costs that I want to transpose into AI music, asking: what would it mean for artistic practices, and what kinds of ethical and political challenges are thrown up, once the AI music community redefines the boundaries of its activities and brings what are now deemed to be unfortunate ‘externalities’ associated with AI into view as fully part of its own responsibilities? And in line with the conference themes, how would our ideas of authorship and performership, but also of the ‘work’, be extended and redefined in this radical light?
Georgina Born OBE FBA is Professor of Music and Anthropology, Oxford University. Earlier, she worked as a musician with avant-garde rock, jazz and improvising groups. Her work combines ethnographic and theoretical writings on music, sound, television and digital media. Her books include Rationalizing Culture: IRCAM, Boulez, and the Institutionalization of the Musical Avant-Garde (California, 1995), Western Music and Its Others (California, 2000), Music, Sound and Space (Cambridge, 2013), Interdisciplinarity (Routledge, 2013), and Improvisation and Social Aesthetics (Duke, 2017). She directed the European Research Council-funded research programme ‘Music, Digitization, Mediation’ and has been a visiting professor at UC Berkeley, UC Irvine, McGill, Hong Kong, Oslo and Aarhus Universities.
George E. Lewis
Is our machines learning yet? Machine Learning’s Challenge to Improvisation and the Aesthetic
Improvisations by creative musical machines are now often indistinguishable from those created by humans. For many, this is a truly unsettling prospect, not least because musical creation can no longer be portrayed as the exclusive and ineffable province of designated superpeople. However, the advent of musical machine learning has fully corroborated my observation from 2000 that interactions with software-based musical systems tend to reveal characteristics of the communities of thought and culture that produced them. These communities include whoever and whatever the machine and its programmers happen to be learning from, whether it be Google’s early ideology of using machine learning to create “compelling” art and music, or recent public misadventures involving racism in face recognition engines. These issues are not just about the machine, but about the machine as part of a social world, a lesson that interactive musical computing first absorbed in the 1980s. If algorithms that “listen” to a corpus of musical behavior and “learn” to produce musical structures based on that behavior are ultimately reproducing embedded cultural values, how can we create new musical and cultural values from an existing corpus? Perhaps nonmusical uses of machine learning, such as the self-driving car, can move us away from genre, aesthetics, and autonomous universalisms, to realize in machine improvisation John Stuart Mill’s observation that “Human nature is not a machine to be built after a model, and set to do exactly the work prescribed for it, but a tree, which requires to grow and develop itself on all sides, according to the tendency of the inward forces which make it a living thing.”
George E. Lewis, Professor of American Music at Columbia University, is a Fellow of the American Academy of Arts and Sciences and the American Academy of Arts and Letters, a Corresponding Fellow of the British Academy, a MacArthur Fellow, a Guggenheim Fellow, and the recipient of the Doris Duke Artist Award. A member of the Association for the Advancement of Creative Musicians since 1971, Lewis has had his compositions (including Voyager, his widely influential interactive improvisation software) performed by ensembles worldwide, and he holds honorary doctorates from the University of Edinburgh, New College of Florida, and Harvard University. Lewis is the author of A Power Stronger Than Itself: The AACM and American Experimental Music (University of Chicago Press) and co-editor of the two-volume Oxford Handbook of Critical Improvisation Studies.
Rebecca Fiebrink
Lessons I’ve learned in 13 years of making creative machine learning technology
In 2008, I was a Computer Science PhD student interested in using machine learning in music performance, and my understanding of machine learning was quite literally taken from standard computer science machine learning and AI textbooks. Since then, my view of what machine learning is good for, how it should be used, and how to make it usable has been completely transformed by numerous participatory design projects and creative collaborations, as well as by teaching music and art students in classes and workshops around the world. In this talk, I’ll present an overview of how and why my thinking has changed, and what this new understanding of machine learning implies for the design of machine learning tools, the use of AI in human creative work, and the role that the arts might play in informing the ways we think about AI in society.
Dr. Rebecca Fiebrink is a Reader at the Creative Computing Institute, University of the Arts London, where she designs new ways for humans to interact with computers in creative practice. Fiebrink is the developer of the Wekinator, open-source software for real-time interactive machine learning whose current version has been downloaded over 40,000 times. She is the creator of the world’s first MOOC about machine learning for creative practice. Dr. Fiebrink was previously an Assistant Professor at Princeton University and a lecturer at Goldsmiths, University of London. She holds a PhD in Computer Science from Princeton University.