Matadero Madrid center for contemporary creation

Cosmic Brains - Public Symposium and Performance

From 18:00 to 22:00 on 25/11/2023
Nave 17. Nave One

The Synthetic Minds, Cosmic Brains: Towards a Gestural Ecology of Mind symposium focuses on the possible relations between human, artificial, and alien intelligence. Our guest speakers—Peter Watts, Nandita Biswas Mellamphy, and Julieta Aranda—are convened by Ed Keller and joined in discussion by leading experts in AI, technology, philosophy, science, and the arts. Their collective work critically engages with the ways that humanity’s future is intrinsically bound to the coming era of ubiquitous synthetic intelligence.

At the core of our response to this transformation are several vital questions: What forms of intelligence will manifest in the coming decade, how will they reshape human life, and, crucially, when and how will they achieve sentience and sapience? What 'new forms of mind' will we create, and how will we coexist with them? And, of critical importance, what new interfaces, languages, or protocols of interaction must we develop to facilitate coexistence? The work of our keynote guests provides insight into the ways that communication, interpretation, metalinguistics, and gesture might develop in the age of AGI, drawing lessons from philosophy, science fiction, technology/media studies, music, and the arts.

The symposium caps a week-long workshop that aims to chart the future of ubiquitous AI, exploring both the vast potential and the existential risks posed by superintelligence. Our goal for this workshop and public symposium is to facilitate a deeper, systemic approach to the emerging near-future ecologies of synthetic minds. We suggest that a viable cosmopolitics could be catalysed by a trans-planetary structure of feeling which unites organic and artificial/alien cognition.


[18.00] Introduction by Ed Keller 

Keller will summarise the Cosmic Brains workshop sessions, themes, and goals; introduce the keynote speakers and guests; and moderate the panels.

[18.15-19.00] Part one: keynote by Peter Watts + panel discussion between Watts and Benjamin Bratton (Zoom)

Peter Watts will discuss models of cosmological mind and the ‘untranslatable’ aspects of alien cognition, as rehearsed in his science fiction novels Blindsight and Echopraxia. The discussion will ask: What cosmopolitical alternatives might be enabled by only partial, gestural communication? Can we find examples in the evolutionary history of life and mind on our own planet that could provide clues both to the factors that governed the emergence of sapience and to models for a neutral or even commensal coexistence with AI and ‘the alien’?

[19.15-20.00] Part two: keynote by Nandita Biswas Mellamphy + panel discussion between Biswas Mellamphy and David Roden

Beginning with the concept of ‘gestural ecology’ as developed by Biswas Mellamphy in her work on Herbert’s Dune, themes will include the neurophysiology of cognition at individual and collective scales; the temporalities of cognition available via language; and the nested timescales and ‘umwelt’ of layered and embodied gestures. The inhuman aspects of ecology, and the models and conceptual tools needed to grasp ecological gestures as forms of mind, whether AI or alien, play a key role in determining whether communication with the radically inhuman is possible.

[20.15-21.00] Part three: keynote by Julieta Aranda + panel discussion between Aranda, Peter Watts and Nandita Biswas Mellamphy. 

Through the lens of her work as an artist, curator, theorist, and historian, Julieta Aranda will discuss the challenges of alignment between human and alien minds as rehearsed in Stanislaw Lem’s writing. Asking whether alignment is possible and, if so, what it might look like, we consider the designs we might implement to move toward collaboration between human and AI. Examples from the timeline of human space exploration, alongside Lem’s texts His Master’s Voice, Summa Technologiae, and Solaris, provide context for a universally oriented discussion of the consequences we face at a planetary scale in our current effort to give birth to AI.

A round table with all keynote speakers and guest panellists concludes the symposium.

[21.15-22.00] Performance: ‘Broken Time’: free improvisation ensemble with gesture capture, enhancement, and AI collaboration

Artists: Ed Keller, Alvaro Domene (Zoom), David Roden, and other artists TBA.

Description: A performance by a free improvisation ensemble exploring gestural capture of sound and embodiment, with real-time and post-processing machine-learning/AI tokenization of music, sound, and gesture. Pre-recorded material, analysed in advance by AI, will also be used.

The goal of this project is to use music and gesture as one component in a larger multimodal system for communicating more effectively with AI. Finding better ways of presenting the haptics and kinaesthetics of the 'real world' to AI is vital to conveying the full spectrum of the human lifeworld. Ultimately, the concept of the gesture is at the heart of this: in gestures we find lossy codecs that do not allow fully error-corrected communication, but if both parties, the human and the AI, recognize this, then we may be able to construct a more viable shared universe. We also suggest that microworlds or pocket universes may constitute a 'safe harbour' for coexistence between species, systems, and types of mind/agency.

Conceptual framework/artist’s statement: ‘This project has a set of overlapping goals relating to sound, the broader human sensorial field, gesture, and what it might mean to assume that sapience is produced in humans when our very finely grained and detailed sensorium processes multimodal information over short and long spans of time [from microseconds to years]. I'm particularly interested in the idea that there are gestures which condition cognition, both physical and n-dimensionally abstract, such as the combination of a musical phrase, the physical movement needed to make that music, and the cultural framework and human emotions and ideas that produce the music. We would like to test the idea that sapience requires a minimum flow of information through a complex cognitive system to emerge. I'm also interested in the idea of ultraconserved gestures: physical movements that humans have made for thousands of years, or words and sounds we have made for thousands of years. This project is about exploring the relationship between such ultraconserved gestures and cognition/sapience, and developing that relationship so that AI systems and machine learning can evolve with detailed gestural/sonic information.’ -Ed Keller