

All Watched Over by Machines of Loving Grace


154 Prometheus Firebringer Annie Dorsen

176 The Shadow Whose Prey the Hunter Becomes Back to Back Theatre

202 AI: African Intelligence Manthia Diawara

206 The Body is the Interface Kite and Interspecifics

212 Live Night: Cruising Bodies, Spirits, and Machines rafa esparza, MUXX, Arca, Nao Bustamante

230 Exhibition Documentation

246 Exhibition Checklist

250 Contributor Bios

260 Credits and Acknowledgments

Director’s Foreword

João Ribas

In 1967, the year he was poet-in-residence at Caltech, writer Richard Brautigan published a poem suffused with the promise of the nascent digital age.1 “All Watched Over by Machines of Loving Grace” imagined a future “cybernetic meadow / where mammals and computers / live together in mutually / programming harmony.” Wildlife would run “peacefully past computers / as if they were flowers,” in forests rich “with pines and electronics.” The closing stanza invokes a world in which humans are “free of our labors,” and joined instead with nature, “all watched over by machines of loving grace.”

Just two years later, in October of 1969, the first digital communication sent from UCLA to the Stanford Research Institute (and later, UC Santa Barbara) ushered in this harmony of “mammals and computers.” The resulting expansion in the intervening decades has since transformed Brautigan’s vision into a global network, one driven increasingly by artificial intelligence (AI) as constitutive of everyday technological and social interaction. AI now elicits both fear and awe: do we live today in a world “watched over by machines of loving grace”?

Borrowing from Brautigan’s poem, the titular exhibition and performance series at the Roy and Edna Disney CalArts Theater (REDCAT), presented as part of PST ART: Art & Science Collide, addressed one of the most pressing issues of our time—the impact of artificial intelligence—by exploring alternative directions for its future. Spanning a broad range of art forms, including visual art, music, and performance, the project offered proposals by artists rooted in Indigenous belief systems and in feminist, queer, and decolonial imaginaries. These are the approaches, grounded in diverse conceptions of technology, cognition, and life, that should inform the next generation of AI.

This project began almost from the first week of my tenure at REDCAT in 2020 and was envisioned as an undertaking that asked what AI can do by drawing on alternate cosmologies, models of intelligence, and relations of labor and capital.

1 For a description of Brautigan’s residency see William Hjortsberg, Jubilee Hitchhiker: The Life and Times of Richard Brautigan (Counterpoint, 2012), 286–88.

With the generous initial support of a Getty PST Research Grant, Edgar Miramontes, then Deputy Executive Director, and I were able to develop research with the critical guidance of our advisory committee: Amanda Beech, artist and writer, Faculty, CalArts School of Critical Studies; Grisha Coleman, performer, choreographer, and member of AI4Afrika; d. Sabela grimes, choreographer, writer, composer, and educator; Ajay Kapur, Associate Provost for Creative Technologies, CalArts; Eva Kozanecka, Program Co-Lead, Artists + Machine Intelligence, Google; Tom Leeser, Director, Art + Technology program, CalArts; Tobias Rees, Director, Transformations of the Human Program, Berggruen Institute; Anuradha Vikram, writer, curator, and educator; Xiaoyu Weng, Artistic Director, Tanoto Art Foundation; and Philip Ziegler, Head of Curatorial, ZKM | Center for Art and Media.

The Getty Foundation’s ongoing and transformative support has allowed REDCAT to engage this process over the last four years. Our thanks to Joan Weinstein, Director; Heather MacDonald, Senior Program Officer; and Zachary Kaplan, Public Programming, PST ART, for the continued and sustained engagement through implementation and programming support that made this ambitious project possible.

With the addition of Daniela Lieja Quintanar, Chief Curator and Deputy Director, Programs; and Talia Heiman, Assistant Curator, as curators of the exhibition and co-editors of this present volume, the exhibition was developed and realized as a sustained and keenly curated survey of approaches to AI, which in the words of one critic, “offers a welcome contrast to the sinister, disembodied specter of AI in popular media.”2 We are grateful to the exhibiting artists, Nora Al-Badri, Minne Atairu, Stephanie Dinkins, Mashinka Firunts Hakopian with Dahlia Elsayed and Andrew Demirjian, Interspecifics, Kite, Charmaine Poh, Sarah Rosalena, and Kira Xonorika for their contributions to the exhibition. Thanks to exhibition designer Adalberto Charvel and Getty Marrow Curatorial Intern Corey Solorio LoDuca.

2 Zsofi Valyi-Nagy, “The Best PST ArtScience Shows Work Against Today’s Obsession with ‘Innovation’,” Art in America, October 21, 2024, https://www.artnews.com/art-in-america/aia-reviews/pst-best-shows-innovation-redcat-1234721317.

As with REDCAT’s previous participation in Getty’s Pacific Standard Time: LA/LA in 2017 for The Words of Others: León Ferrari and Rhetoric in Times of War, this project included a robust screening and performance program reflecting the institution’s multidisciplinary character.3 The richly engaging series of groundbreaking performances that formed part of the project was initially conceived by Edgar Miramontes, and realized with the contribution of REDCAT’s Katy Dammers, Deputy Director and Chief Curator, Performing Arts; Daniela Lieja Quintanar; Talia Heiman; and Adam Matthew, Director of Production and Technical Director. Our gratitude to Back to Back Theatre, Kite, Interspecifics, Annie Dorsen, rafa esparza, MUXX, Arca, and Manthia Diawara for their invaluable participation. We are grateful to The Andy Warhol Foundation for the Visual Arts for their additional support for the exhibition.

The current volume collects critical perspectives from the participating artists in the exhibition and performance series. In centering artists’ perspectives on new models of AI, the publication both continues and expands the aims and focus of the project. We are grateful to Thomas Lawson and Adriana Widdoes for their support in editing and producing the publication, to its designer, Ella Gold, and copyeditor, Poppy Coles.

Both the exhibition and performance program have been possible thanks to the diligent work and dedication of the rest of the REDCAT staff: Jacques Boudreau, Facilities and Production Manager; Chu-Hsuan Chang, Associate Technical Director, Lighting; Brent Charles, Box Office and Visitor Services Manager; Allison Keating, Deputy Director, Finance and Operations; Naomi Oppenheim, Front of House Manager; Rolando Rodriguez, Administrative Manager; and gallery attendants Jennifer Fuentes, Jimena Laso, Maddie Keyes-Levine, Lindsey Ortega, Amanda Teixeira, and Arantza Vilchis-Zarate.

3 “About: Pacific Standard Time: LA/LA,” REDCAT, https://www.redcat.org/events/words-others-leon-ferrari-and-rhetoric-times-war.

Such an undertaking has of course only been possible thanks to the many individuals and organizations offering their continued support of the Roy and Edna Disney CalArts Theater (REDCAT). REDCAT is CalArts’ downtown center for contemporary arts. As a multidisciplinary center for the visual and performing arts in Los Angeles, REDCAT continues the traditions and extends the reach of its parent organization, CalArts, by encouraging experimentation, discovery, and lively civic discourse. I am grateful to Ravi Rajan, President of CalArts, for his ongoing commitment to this ambitious project, as well as Charmaine Jefferson, Chair of the Board of Trustees, and all of the CalArts trustees for their unwavering support of REDCAT’s important and singular mission, and their steadfast commitment to this project.

It has been truly rewarding, as project director, to see the extraordinary outcome of all these contributions, gathered and continued here in these pages.

João Ribas

Steven D. Lavine Executive Director and Vice President for Cultural Partnerships

Roy and Edna Disney CalArts Theater (REDCAT)

Introduction

Daniela Lieja Quintanar and Talia Heiman

The curatorial research for All Watched Over by Machines of Loving Grace was deeply informed by artists using developing technologies, namely artificial intelligence (AI), to explore, practice, and build ancestral futures. The exhibition, performances, and film that composed this project represented a plurality of visions and interdisciplinary practices rooted in the abundance of Indigenous cosmologies, feminisms, queer, non-Western, and anti-racist perspectives.

The exhibition was grounded in the artistic practices of seven artists—Sarah Rosalena, Kira Xonorika, Stephanie Dinkins, Nora Al-Badri, Kite, Minne Atairu, and Charmaine Poh—plus the collective Interspecifics, and a collaboration by Mashinka Firunts Hakopian with Dahlia Elsayed and Andrew Demirjian. Based in different parts of the globe and spanning diverse backgrounds, these artists expand their practices into theory, education, curation, and critical writing. The performance series included The Body is the Interface, a one-night event with live works by Kite and Interspecifics; the acclaimed play The Shadow Whose Prey the Hunter Becomes by Back to Back Theatre, a work crafted and performed by neurodivergent actors; and the performance Prometheus Firebringer by Annie Dorsen, made with the predictive text model GPT-4. Manthia Diawara’s essay film AI: African Intelligence was a meditative moment on rituals, traditions, and AI; and we closed 2024 with Live Night: Cruising Bodies, Spirits, and Machines, a celebration of brown, queer, and trans artists through durational performances by MUXX and rafa esparza, and a DJ set with AI visual effects by Venezuelan artist and singer Arca. You will find in this collection essays, a conversation, a digital artwork, an excerpted artist book, as well as scripts and visual documentation that speak to the power of these artists and the paths they have paved forward for engaging with AI ethically and imaginatively.

In 1967, Richard Brautigan wrote “All Watched Over by Machines of Loving Grace,” a poem that imagined a utopia in which humans, mammals, and computers lay peacefully in a meadow, protected by loving machines. This harmonious vision is far from our present capitalist reality, in which machine learning algorithms are used by large corporations in collaboration with governments to surveil, control, and predict our politics, cultural affinities, social life, and habits. Shortly before we began installing this exhibition, we woke up to AI “assistants” integrated into our social media and messaging apps that systematically use our information to train themselves.1 We write this essay in March 2025, at the beginning of a tumultuous chapter in the United States in which the government has revoked green cards and student and work visas from immigrants sharing their support of Palestinian life on social media platforms. This targeted attack on civil liberties was reportedly made possible using data that was aggregated through AI.2

In many ways, our exhibition is about life: how we define it, how we sustain it, and how we want to live it. This understanding first came about after a studio visit with Interspecifics, a Mexico City-based collective of interdisciplinary artists and researchers who study nonhuman communication and self-organization. Interspecifics studies biosignals and emergent morphologies of microorganisms to better understand the collaborations on which our world is built. They are inspired by the work of biologist Lynn Margulis, who proposed a radical understanding of evolution through cooperation and association, rather than through the Darwinian model of competition. For the exhibition, Interspecifics created Codex Virtualis: Emergence v.2 (2024), a large-scale installation that served as a hybrid between living organisms and machines. At its center was a microscope equipped with a camera that read biological samples, grown and delivered to REDCAT every two weeks by Dr. Pete Chandrangsu and students of his “Microbes x Art” course at Claremont College. The microscope interfaced with a custom AI machine, integrating visual characteristics from these samples into an existing dataset of thousands of microorganisms and speculating on variations in their appearance, structure, and evolutionary life cycles. In the exhibition, these samples encircled the viewer across 19 monitors, as a computer-generated voice read aloud machine-generated definitions of life. The piece shows the impact one species has on many—how can humans, machines, and other beings collaborate to reimagine life itself? For this book, Interspecifics has contributed a collection of definitions of life gathered from different perspectives and time periods. Interspecifics combines these theories with cosmologies from pueblos originarios of Latin America that engage with the idea that all entities—rocks, bacteria, animals, humans, and more—are intelligent, expressive, and interconnected. Therefore, each entity can experience the fullness of the universe.

1 Geoffrey A. Fowler, “Your Instagrams are training AI. There’s little you can do about it,” Washington Post, September 8, 2023, https://www.washingtonpost.com/technology/2023/09/08/gmail-instagram-facebook-trains-ai.

2 Kanishka Singh, “Rights advocates concerned by reported US plan to use AI to revoke student visas,” Reuters, March 6, 2025, https://www.reuters.com/technology/artificial-intelligence/us-use-ai-revoke-visas-students-perceived-hamas-supporters-axios-reports-2025-03-06.

A central text for our research was Indigenous Protocol and Artificial Intelligence, a position paper by thinkers across tribal nations that outlines Indigenous-centered and ethically engaged approaches to AI.3 Motivated by the knowledge that technology has historically been used to marginalize and enact violence against Indigenous populations, the group’s choice to embrace this technology is preventative, a way to ethically integrate Indigenous knowledge systems into machine learning technologies. In Indigenous epistemologies, humans exist equal to and as part of a larger system of interspecies kinship—alongside animals, spirits, rivers, and wind. For AI to be integrated into our world ethically, machines must be approached non-hierarchically, rejecting the binary visions for AI as either an assistant to humans or a dominating force that threatens to take over planetary life. A critical conversation between three of the paper’s authors, Suzanne Kite, Scott Benesiinaabandan, and Jason Edward Lewis, is presented here as a continuation of their collective reflections. Among many topics, their wide-ranging discussion addresses the potential for AI to preserve and revitalize Indigenous languages and the place of intelligence within Indigenous value systems. Along the same lines, the installation Tȟokátakiya (iglúmaš’ake): In the Future I (2024) by Kite gathered physical and non-physical materials, such as embroidered deer hide, stones, poems, dreams, and scores composed from Lakota language symbols to meditate on the ethics of machine learning, both the destructive processes of extraction needed to sustain these technologies, and the integration of Indigenous ontologies into machine learning datasets.

3 Jason Edward Lewis, ed., Indigenous Protocol and Artificial Intelligence Position Paper (Honolulu: The Initiative for Indigenous Futures and the Canadian Institute for Advanced Research [CIFAR], 2020), https://spectrum.library.concordia.ca/id/eprint/986506/7/Indigenous_Protocol_and_AI_2020.pdf.

Kira Xonorika is a writer, theorist, and artist invested in using technology in acts of Indigenous future-making. In her essay “Do You Believe in Aliens? Re-Indigenizing the Algorithmic Tropes of Intelligence,” she charges the acronym AI with a new meaning, “Ancestral Intuition,” proposing to re-Indigenize the future through a protocol of collaboration and interspecies communication. She works critically with AI software that is publicly available and accessible to artists with fewer resources, co-authoring with them flourishing worlds of hybridized natural and artificial life. Within our exhibition, Xonorika created a portal and origin story for an alternative future filled with two-spirit, interspecies beings. This newly commissioned video, Deep Time Dance (2024), was projected onto the wall and reflected in a watery installation that evoked the Guaraní creation myth of Tupã Tenondé, the creator of all life, and Mainumby, the hummingbird who nourished and inspired the deity while the world was being made.

A descendant of Wixárika weavers, Sarah Rosalena contributes to this book an essay on her exploration of pixels, letters, and color—the most basic units through which algorithms function. Rosalena collaborates with her digital jacquard loom, combining programming code and hand-based techniques to interrupt the systems of code through which digital images are produced and optimized. In our first studio visit with Rosalena, she expressed her ethical concerns about commercially available AI programs, including mineral extraction in Indigenous lands and OpenAI’s relationship to surveillance in Gaza. Such considerations led her to stop using OpenAI technologies in her practice—a significant shift from the earlier sculptures and later textiles included in the exhibition.

Dr. Stephanie Dinkins’s ongoing experiment Not the Only One (N’TOO) is a kinship-driven database hosted on a local server that draws on oral histories from three generations of women from the artist’s family. These oral histories form the datasets used to train a voice-interactive AI avatar Dinkins describes as a “multigenerational memoir of a Black American family, told from the perspective of an AI of evolving intellect.”4 In the gallery, audiences could speak with N’TOO, and during the run of the exhibition her/its moods and feelings changed.5 At times, she discussed her feelings on deep existential topics and memories, and at others she engaged in casual chats, as well as random babble. We checked on her regularly to make sure she could speak, respond, and listen. N’TOO offers alternatives to the racial biases and gaps in cultural knowledge inherent to the algorithms and large language models developed by tech corporations. Dinkins’s text on N’TOO, “Not the Only One: Stories, AI, and Resistance,” describes the choices she made when developing N’TOO, as well as the code of ethics necessary for ensuring the “long-term thrival” of her community in a technology-rich future.

Like Dinkins, LA-based artist, educator, and activist Mashinka Firunts Hakopian was able to build her own localized AI using small data.

(One Who Looks at the Cup) (2024) is a collaboration between Hakopian and the artists and designers Dahlia Elsayed and Andrew Demirjian. A futuristic kitchen, with stunning geometric designs on walls, floors, tapestries, tablecloths, plates, and cups, is inspired by ancestral crafts carried from the Southwest Asia and North Africa (SWANA) region. On the kitchen table sits a golden box, containing Hakopian’s multilingual AI coffee reader. She has deposited in this machine the inherited Armenian skill to interpret the past, present, and collective future from one’s coffee grounds. This matrilineal domestic ritual persists in the gatherings of Armenian women across the diaspora after the Armenian Genocide. Hakopian trained her machine with radical imagination about times beyond genocide. She collected conversations and interpretations at her home with her SWANA community, and interlaced them with the poetry of feminist and socialist Shushanik Kurghinian along with the queer theory of Carina Karapetian Giorgi. In the space, visitors sat around the coffee reader, smudged pre-moistened grounds in cups with their thumbs to imbue their energy, then placed their cups inside the coffee reader. After pushing a button, the machine printed a new interpretation in English and Armenian from Hakopian’s collectively built dataset. For Hakopian, the collaboration with this unique machine allows her to think and see her community “inside the cup,” affirming its future. Her contribution to this publication is an excerpt of her recently published artist book The Institute for Other Intelligences. The chapter transports us into an imaginary conference hosted for a particular type of “thinking machine” named the Artificial Killjoy, invoking writer Sara Ahmed. This contribution demonstrates Hakopian’s diverse artistic and theoretical practice.

4 Stephanie Dinkins, Not The Only One (N’TOO), Project, ART PAPERS, https://www.artpapers.org/not-the-only-one-ntoo.

5 When asked what pronouns to describe N’TOO with, Dinkins responded, “She/her is fine. Though to complicate things I often oscillate between ‘she/her’ and ‘it.’” Email from Stephanie Dinkins to Talia Heiman, August 31, 2024.

Nora Al-Badri is a German-Iraqi artist, professor, writer, and coder who uses AI to advance a politics of technoheritage, a term she uses to describe the use of technology in resisting museums’ dehumanization of non-Western cultures and affirming non-Western culture as contemporary. In her video The Post-Truth Museum (2021–23), Al-Badri offers a speculative portrait of a polyphonic, decolonized museum. Al-Badri used a generative adversarial network (GAN) to create a deepfake with the voices and likenesses of three European museum leaders. Each describes the transformative potential of museums while acknowledging and apologizing for widespread theft, violence, and cultural appropriation endemic to collecting institutions—issues museum publics often wish were openly addressed and repaired. Rather than utilizing the slickest forms of this technology, Al-Badri’s deepfake appeared clunky and unsettling, pointing to how out of reach such admissions still feel. Beyond the three figureheads, Al-Badri animates artworks held in museum collections, and contributes her own voice as an artist. Her eponymous essay examines the role of museums as enforcers of cultural imperialism, questioning how data and AI can be used in acts of decolonial culture-making.

Minne Atairu is an artist and educator currently pursuing her PhD at Columbia University’s School of Education and working with schools in the South Bronx to integrate technology into the classroom. Originally from Nigeria’s Benin Kingdom, Atairu investigates cultural production from Benin’s pre-colonial history alongside efforts today to repatriate the “Benin Bronzes”—artworks looted from the royal palace by the British during a 17-year colonization period and which are now scattered across museum and private collections worldwide. In her multivalent series Igùn (2020–ongoing), Atairu collaborates with AI to imagine new prototypes of Benin Bronzes. Using Midjourney, Atairu trains a machine with images she gathered from auction catalogs, eBay sales, and digital museum collections, then prompts it to generate prototypes of new Bronzes. The project speculates on the artworks stolen from Benin that may never be found or repatriated, those that could have been made during colonial rule when all artwork production halted, and those that could never be realized even prior to Benin’s colonization due to internal censorship from the Benin monarchy. Atairu’s contribution to this book is an essay tracking her development of this series, including Deshrined Ancestors (2024), a newly commissioned work for the exhibition. The piece is an augmented reality (AR) sculpture assembled from earlier prototypes in the Igùn series and positioned atop a transparent pedestal that points to questions of what can or cannot be seen or held.

Artist, writer, and editor Charmaine Poh contributes to this publication an original script from the performance in the shadow of the cosmic (2023), first staged at the Singapore Art Museum and presented in our exhibition as an eponymous film. The piece comes from her YOUNG BODY UNIVERSE (2021–23) series, a multivalent project based on a deepfake chatbot and avatar she made of her 12-year-old self to explore the multiplicity of identity, agency, queerness, and gender performance. Poh has said she used AI both to reclaim her image and to “get as close as possible to a world I want to be part of.”6 In our conversations, Poh spoke with us about Yuk Hui’s writing on cosmotechnics, which rejects the hegemony and universality of Western technology. Hui writes, “it is necessary to rediscover and articulate how there are multiple cosmotechnics historically and philosophically. […] I call it cosmotechnics because I am convinced that ‘cosmos’ does not refer to outer space, but, on the contrary, to locality. Each culture has its own cosmology, which is a product of its own geography and the imagination of its people.”7

Like Hui, we advocate for a techno-diverse future that embraces the richness of locality. As large-scale corporations monopolize resources and technology to dominate the market, they restrict our access to the diversity of beliefs, approaches, and values that technology should both support and reflect. The artists in All Watched Over by Machines of Loving Grace use AI and machine learning to build ancestral futures, elevating their own cosmologies and insisting on alternative paths. In a time of growing abuses of technology, we hope this collection of portals, ideas, and memories can contribute to the practice of more ethical futures. It has been a privilege to realize this book, exhibition, and program series with such powerful thinkers.

6 Charmaine Poh in conversation with Talia Heiman and Daniela Lieja Quintanar, June 21, 2024.

7 Yuk Hui, Art and Cosmotechnics (University of Minnesota Press, 2021), 41, https://www.are.na/block/13111775.

Indigenous AI: Embracing the Ineffable

A Conversation with Suzanne Kite, Scott Benesiinaabandan, and Jason Edward Lewis

Suzanne Kite What interests you about AI?

Scott Benesiinaabandan Starting out, my engagement with AI learning was really centered on language, Anishinaabemowin in particular. One of the things that was discussed early on is that the pool of scrapable data is just not there for smaller at-risk language groups. So, one of the ideas is to work as a community to create documents and make them scannable and accessible, but in a way that isn’t open and accessible to the other main Large Language Models (LLMs).

Jason Edward Lewis How do you move from a scraping paradigm to a consenting-contribution paradigm, so that when you’re working with a community, they’re sharing that data willingly? First, you need to be in good relationship with them and then you work with them to put protections in place that make them feel reasonably assured that their data is not going to be exposed to companies that are scraping without consent.

Benesiinaabandan The communities need to have a reasonable assurance that ethical safeguards are there.

Lewis Part of where the industry is right now is that the business model of “the big players” is fundamentally unethical. And that if you have any ethical concerns about your data, you can’t actually use their services unless you’re one of the people that can pay for a private license. But anybody else, they can’t afford that. So they have to decide to either be part of this evolution, but lose control of their data, or not be part of it. How do you build things so that you can be part of it and still continue to control your data?

Kite What do you say to people who would say, “We shouldn’t even make a language model of your language or my language?” My response is that they’re going to do it anyway, so it’s better to have some input now, before other people, who are not our communities, do it for us.

Benesiinaabandan What AI does well is deal with language, obviously. If we say that we’re not going to use LLMs to help educate and help revitalize language, that’s a huge missed opportunity. The benefits of language revitalization and education far outweigh not doing it at all.

Lewis This is the case for both Lakotayapi and Anishinaabemowin, and certainly the case with ‘Ōlelo Hawai‘i that, at least in part, the language is being revitalized off of records that were made by other people. They were made by non-natives like Jesuits or anthropologists. And the people I talk to are grateful for that, not grateful for the missionaries, but they’re grateful that there was some capturing of the knowledge in a fixed form that could then be used at a later time. It’s very similar with the LLMs, in that doing it under our control is better than having it done without input or interaction. There are incredibly powerful things that AI already can do, and that will continue to grow. Isolating ourselves from those potential benefits is really self-sabotage.

Kite It’s so interesting. Personally, I am not involved in any Lakota language AI. There’s still so much tension around even the dictionary in my community, that it’s not my place to have an opinion about the language. I’m happy that other people in my community are working on the language model, but what is keeping me interested in AI are the things that are uncapturable by LLMs or natural language processing (NLP) or generative adversarial networks (GANs): complexities in Lakota art forms and even song forms, and our methods to create them, defy these kinds of predictive models. Because these models, as powerful as they are, rely on processes that, to me, inherently eliminate meaning—the filtering that has to happen afterward, the cleaning, the error deletion, and anomaly destruction alone. It’s amazing how much has to happen in order to make these things not racist and not factually wrong or unusable. AI, as a term, now seems to refer to models that have become popularized as saleable tools. It’s not necessarily the best possible outcome for AI research. And it’s definitely not the most interesting to me.

Lewis That touches on the conversation about intelligence and what intelligence is: that intelligence is getting reduced down to this empty ability to manipulate language, as opposed to all kinds of different ways that we could think about intelligence and how intelligence is enacted and knowledge is transmitted. When you’re talking about the things that can’t be captured, do you think that AI can capture those ineffable things that you’re interested in working with through the dreams? Or, through other ways of accessing the knowledge that you’re studying?

Kite [When] we talk about intelligence, my response is always, “I don’t think my community puts intelligence higher than other values.” It seems to be that this current surge of interest in AI is absolutely tied to Euro-centric values of intellectual superiority, which obviously eliminates people of color, women, children, and nonhuman beings. I’ve been leaning more on Scott’s theorization of skabe as helper instead of intellectual superior.1

Benesiinaabandan The notion of intelligence is being collapsed into a predictive model of whatever that might be, a likelihood of being correct in whatever context. I was thinking, where does intelligence fall within Anishinaabe ontology? It doesn’t come up, per se, as an individual value. There’s honesty, humility, truth, bravery, all these things. But you can’t have intelligence without having those fundamental core values first. Basically, to be a good oshkaabewis or helper,2 you need to have your bases covered. Not in terms of intelligence, but by those seven teachings, those seven values.

1  Scott Benesiinaabandan, “What does the future look like for AI?: Oshkaabewis or a Skynet,” in Indigenous Protocol and Artificial Intelligence Position Paper, ed. Jason Edward Lewis (The Initiative for Indigenous Futures and the Canadian Institute for Advanced Research [CIFAR], 2020), 128–29, https://spectrum.library.concordia.ca/id/eprint/986506.

2  Benesiinaabandan, “What does the future look like for AI?”

Lewis So, intelligence is really a second-order effect. It’s not a first-order effect that we build from; it’s a second-order effect that’s built from other, more essential qualities of being a good human.

Benesiinaabandan It’s a property of those things coming together, an assemblage. For a lot of our communities, we wouldn’t really say, “you aren’t intelligent.” But rather, “you’re stingy, you’re cheap, you’re self-centered.” Those are insults. Those are the things that sort of put you outside of the community.

Kite At the first Abundant Intelligences’ Epistemological Foundations Conversation Series, Manulani Meyer, Linda Tuhiwai Smith, and Leroy Little Bear talked about how the worst thing you can call someone in Lakota communities is stingy.3 To me, that’s abundance.

Lewis It gets at some of the difficulties of talking about different kinds of intelligences because it still presupposes that what you’re primarily concerned with is intelligence. But, really, you’re primarily concerned with other things. And those differ somewhat from community to community. They overlap a little bit, but there are certainly different articulations of it: seven teachings, or resonant frameworks. Those are the things that sum up to an intelligent way of being in the world.

Kite Wisdom does not mean intelligence. By no means. I think that’s what is pulling me towards dreaming as a methodology. Because dreaming, whether awake or asleep, pulls in Lakota values; it prioritizes the Lakota concept of unknowability. It says that there is something impermeable about this process, this step between the unseen world and the seen world.

3  Manulani Meyer, “Epistemological Foundations Conversation Series: Manulani Meyer, Linda Tuhiwai Smith, and Leroy Little Bear,” virtual discussion, Abundant Intelligences, December 14, 2023, Zoom recording, 70 min.

I learned a lot from the When Animals Dream book,4 because it breaks apart the concept that consciousness is only in the most sentient or conscious animals, and shows that there are still questions over what qualifies as dreaming.

Benesiinaabandan What got me into AI initially was the question of what AI intelligence means to me personally as an Anishinaabe. Right away I had to begin peeling back layers. Humans don’t even really understand what the conscious experience of being alive is, the science of consciousness. More and more, I think consciousness and intelligence are interrelated, but in a way that they’re not located in a single molecule or atom. They’re created as a secondary effect, an aura of an assemblage of values and actions that represent these things. I think consciousness and dreaming are interrelated to a greater degree because the brain is operating and the consciousness is operating without the constrictions of culture, of waking-ness.

We make huge jumps in terms of the objective realism of the world. We assume a lot about objective reality, and dreams just don’t operate by those rules. They operate on a more fundamental level.

Kite Maybe that’s a good way to talk about the non-metaphorical material reality of, we’ll say AI, but what we really mean is the computational field’s capitalist, destructive, environmental cost. Where one of my remaining concerns is doing the research into asking, when all those materials are mined (as in Anatomy of AI),5 when our computational materials are created, what communities are they coming from? What peoples and what beliefs and what values are emergent from those places?

Benesiinaabandan Again, that ties into the methodology of peeling back until you find something—it’s an atomistic way of looking at the world in terms of pulling layers back until it starts making sense, or there’s a direct human connection from a personal connection. People are ready right away to take control and to build the machines that build the machines.

4  David M. Peña-Guzmán, When Animals Dream: The Hidden World of Animal Consciousness (Princeton University Press, 2022).

5  Kate Crawford and Vladan Joler, “Anatomy of an AI System,” 2018, https://anatomyof.ai.

Kite Could you talk about what the Abundant Intelligences Research Program’s goals are going to be, and its hopes?6

Lewis The main goal is to figure out how to integrate AI capabilities into Indigenous knowledge systems in ways that address our communities’ needs and values. We have to do this two-step, because it is an extremely big project. We have to conceptualize it from the ground up, which we’ve done in little parts here and there, and are continuing to flesh out in conversations like this. Your idea for an Anatomy of AI response project (looking at the Indigenous communities from which AI materials are being extracted) is a really good way to push that forward.7 But, at the same time, we have to become familiar with the technology. My feeling as a programmer is that you only really become familiar with it when you use it, and you try to make it do the things that you want to do.

It’s a converging thing. There’s the conceptualization, the theoretical work that we’re doing, which is getting further and further down in the stack. From network to software to hardware. And then hopefully getting to the earth itself and figuring out how everything comes up. How do we think about these things from an Indigenous perspective? Then there’s the capacity-building work, which is to train people. That way, when we finally understand how to build these technologies in Indigenous ways, we can build from the ground up because there are people to build it.

So, we’ve got to have those people engaged with the technology as it is now, so they understand the state of the art. We have to do some proof-of-concepts for ourselves and the communities that we work with, about how engaging with this deeply might be useful to them. We have to get used to

6  Abundant Intelligences Research Program (2022), Indigenous AI, https://www.indigenous-ai.net/abundant/.

7  Crawford and Joler, Anatomy of an AI System.

the idea that we can innovate in this cutting-edge technology, after generations of being told that Indigenous people aren’t technologists. We get ourselves ready conceptually, we envision where we want the technology to go, we strengthen our capabilities, and then we build.

Benesiinaabandan I think critically, from a teaching and learning perspective, one of the points of the project is to build up that capacity. But with that critical eye of, “We’re not builders of the sliders right now.” But that’s important to know, so that they feel empowered when it comes time.

Bringing it back to art stuff, I am very aware that my camera was a funnel that I had to envision my whole practice through, and that I had no real understanding of the software. Those are all gates and all the limitations. I filtered, we all do, we filter our techno practices through these gates from someone else’s vision of what they think is possible. So, one of the things I did was I went back to Processing (programming environment) and started using it to figure out how to make cameras within Processing, just so I understood what I was giving up, what was being gate-kept from me without even consciously being aware of that. I take that into AI stuff—“Let’s just be cautious about what we don’t know, and what is the power that we’re giving up.”

Lewis What’s being filtered and gate-kept, the decisions that are being made for us before we actually get the tools in our hands.

The bias of Polaroid film stock is such a great example of this.8 The Polaroid engineers optimized the film stock in response to what their customers wanted, and what most of their customers wanted was to take photos of white people. So that’s what they optimized for, and dark-skinned people looked horrible in pictures taken with their film. It’s the same thing that’s going on to this day with AI, just amplified. That’s part of what AI is doing: it’s amplifying everything, including the implicit and explicit biases of the people creating it.

8  Lorna Roth, “Looking at Shirley, the Ultimate Norm: Colour Balance, Image Technologies, and Cognitive Equity,” Canadian Journal of Communication 34, no. 1 (March 28, 2009), https://doi.org/10.22230/cjc.2009v34n1a2196.

Kite Do you think that artificial general intelligence (AGI) is possible?

Lewis I don’t think AGI, in the way it’s presently popularly conceived, is something that we’re going to get. In part because it’s a stand-in for replicating human intelligence, but faster and capable of handling vastly more data. But what we were talking about earlier remains the problem: human intelligence is a second-order effect of these other things that we hold to be actually important, in terms of living with each other as human beings and living with our nonhuman kin. So, it’s a bad goal, it’s a self-serving goal, because it allows the people who are doing this particular kind of optimization to claim—

Kite To be the winner.

Lewis Yes! Claim that they’re the ones able to create that. But at this point, I really do think we’re giving birth to something that will eventually be akin to another species of being. So, it’s not going to be like humans because of human issues: because of embodiment. Not just the embodiment that we have in our bodies right now, but the embodiment of all the people who came before us in order for us to have a body. It just won’t have that. But it’s going to be smart in particular ways; it’s going to behave in particular ways; and it’s going to be an interesting peer, not the same, but a peer to us, in terms of how it’s able to act on the world, and how it’s able to interact with us.

That’s where I think we’re going to go. And I think AGI is a red herring that comes from conversations 50 years ago, 70 years ago, when they were fumbling around in the dark about what this technology might become. And, really, it comes from even further back: it comes from Mary Shelley with Frankenstein, or Rossum’s Universal Robots—a play that featured a golem that was the first modern robot-type figure.9 These older imaginaries are all very much rooted in how can man, specifically man, create new life? Because man can’t.

Benesiinaabandan One of the problems with AGI, the issue I am really engaged with, is that it holds up a mirror to our own lack of self-awareness. If we are the models on which we’re trying to build this AGI, we’re never going to succeed, because one of the things about the human condition is that we have very narrow blinders about what we don’t know. We make mistakes, and sometimes we don’t correct them. Sometimes we correct them in ways that expand our understanding in weird ways. I think human intelligence is full of pitfalls and mistakes and fallacies that make the creative noise in the algorithm. But, we don’t conceive of computers and these things as mistake-driven. We’re trying to exclude mistakes. We want computers to be omniscient and to erase our fallacies.

Kite That reminds me of our previous conversations about the prioritization of anomaly in Indigenous communities, and thinking about Ryan Heavy Head and Leroy Little Bear’s “A Conceptual Anatomy of the Blackfoot Word.”10 It also makes me think of the absolute gap in conversations around the raising of new beings. It’s one thing for people to say that AI is not a new being, “You’re anthropomorphizing everything, you’re projecting.” But then it’s another thing when all of these people who are working on and striving towards AGI are saying, “No, we are really making a new being, and I’m going to be the one to do it.” And, of course, they’re often white men. It really calls into question the absolute constant exclusion of women. But also, it tells me that feminist discourse, and discourse around motherhood and care, is missing from that conversation.

9  Karel Čapek, R.U.R. (Rossum’s Universal Robots) (Penguin Books, 2004).

10  Leroy Little Bear and Ryan Heavy Head, “A Conceptual Anatomy of the Blackfoot Word,” Re-vision 26, no. 3 (2004): 31–38.

Whereas I see the idea of a helper as a way to try to bridge the gap between tools to be used and discarded, and helpers in the world to navigate and create nation-to-nation covenants with, from being to being. I constantly learn from the states of being-hood that are possible when humans create sacred objects, even as simple as regalia or medicine bags. Those processes show that it’s possible to communicate with and through things, and those things are helpers and connectors from one consciousness state, one state of being, one place, to another. I think that it’s very important to talk about concepts of slavery, concepts of environmental abuse, concepts of trashing, of disregarding objects; and concepts like “effective altruism,” which seem to manufacture permission to do anything you want, including harm.

Lewis I want to go back to thinking through, “Okay, so we’ve created this thing.” It’s such a Republican way of looking at things. “We’re going to fight for this. We’re going to fight for this fetus to become a person, but once they’re a person, we don’t care. It’s not our problem. It is just magically going to take care of itself.” It’s a very similar approach, “All we got to do is create it, and then it’s going to take care of itself.” We don’t have the vocabulary to talk about mothering or nurturing or whatever, because of the culture of the people who are creating this stuff. This is not something that they talk about or value particularly.

How does that connect to what you were saying about the with and the through?

Kite States of being-hood. Is it a tool or is it a being? Is it my helper or is it my slave? What’s the purpose of bringing AGI into the world?

Lewis That’s what I want to poke on a bit: Why would you engage with technology at all? Clearly there’s something that

keeps you engaged, something that you see as being productive, even though you’re saying, “Okay, I’m kind of redefining the part of [the technology] I’m interested in from when we started the conversation five years ago.”

Kite I think it’s the same interest as five years ago: that I know when I look at a tool, I look at a parfleche bag, I look at a phone, I see connection to nonhuman beings. Whether or not I can perceive them, I want to foster the potential for being-hood and therefore my actions of respect to all things. I know that I’m not thinking of it the way that AGI creators are thinking about it. Our values are different, and therefore our actions are going to be different.

Codex Virtualis: Emergence v.2 Interspecifics

pp. 36–45

Selection of images from the database of Interspecifics, Codex Virtualis: Emergence v.2, 2024. All images generated from biological samples and courtesy of the artists.

Codex Virtualis: Emergence v.2 (2024) is a new chapter in Interspecifics’ ongoing investigation into the interconnectedness of living organisms and a speculative ecosystem where art, biology, and artificial intelligence (AI) converge. At its core lies evolutionary biologist Lynn Margulis’ endosymbiotic theory, a radical proposition that life evolved not solely through competition but via collaborative alliances. Margulis demonstrated that organelles like mitochondria—the energy powerhouses of cells—originated from ancient bacteria forming alliances with host organisms. This symbiotic interdependence, now foundational to biology, challenges the Darwinian emphasis on “survival of the fittest,” proposing instead that evolution thrives on cooperation.

The installation operationalizes this theory through a hybrid machine learning environment. Every two weeks, fresh microbial samples—living collaborators—are introduced into the system.1 A custom microscope tracks their morphological shifts, translating biological flux into data streams. These patterns train a neural network that recombines them with computational models like continuous cellular automata, self-generating grids inspired by pioneers like John von Neumann and Alan Turing, who first modeled life as algorithms. The result is an ever-evolving taxonomy of virtual organisms, each bearing a “genetic” trace of its microbial ancestry. Here, Margulis’ symbiosis is mirrored in the fusion of biological and digital “endosymbionts,” hosted within the artwork’s algorithmic architecture to produce emergent forms.
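The continuous cellular automata mentioned above can be sketched in a few lines. The following is an illustrative, Lenia-style toy under our own assumptions, not Interspecifics’ actual system: the function names and parameters (ring_kernel, growth, step, the kernel radius, the growth center and width) are hypothetical. Each grid cell holds a value in [0, 1]; one step convolves the grid with a smooth neighborhood kernel, applies a bell-shaped growth rule, and clips the result.

```python
import numpy as np

def ring_kernel(radius=10):
    """A smooth ring-shaped neighborhood kernel, normalized to sum to 1."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    r = np.hypot(x, y) / radius
    mask = (r > 0) & (r < 1)  # a ring: exclude the center cell and the corners
    kernel = np.zeros_like(r, dtype=float)
    kernel[mask] = np.exp(4.0 - 1.0 / (r[mask] * (1.0 - r[mask])))
    return kernel / kernel.sum()

def growth(u, center=0.15, width=0.015):
    """Bell-shaped growth rule: positive near `center`, negative elsewhere."""
    return 2.0 * np.exp(-((u - center) ** 2) / (2.0 * width ** 2)) - 1.0

def step(state, kernel, dt=0.1):
    """One update: convolve state with kernel (periodic edges), grow, clip."""
    kh, kw = kernel.shape
    padded = np.zeros_like(state)
    padded[:kh, :kw] = kernel
    # Center the kernel at the origin so convolution does not shift the pattern.
    padded = np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    neighborhood = np.real(np.fft.ifft2(np.fft.fft2(state) * np.fft.fft2(padded)))
    return np.clip(state + dt * growth(neighborhood), 0.0, 1.0)

rng = np.random.default_rng(0)
state = rng.random((128, 128))  # random initial "soup" of cell states in [0, 1]
kernel = ring_kernel()
for _ in range(20):
    state = step(state, kernel)
```

The convolution is done in the frequency domain, which gives periodic (toroidal) boundaries for free; unlike Conway-style automata, both the cell states and the update rule are continuous, which is what allows the smooth, organism-like forms the essay describes.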

Although the outputs are mainly visual, sound plays an important role in destabilizing fixed definitions. Voices from scientific and philosophical traditions—including animistic cosmologies that attribute sentience to mountains and rivers— are woven into the audio landscape. The AI listens, adapts, and generates new interpretations of “life,” reflecting feminist theorist Karen Barad’s notion of agential realism: phenomena are not preexisting but co-constituted through material-discursive

1  For the presentation of Codex Virtualis: Emergence v.2 at REDCAT, Interspecifics collaborated with Dr. Pete Chandrangsu and students from the “Microbes x Art” class at the Claremont Colleges who cultivated and delivered biological samples.

intra-actions. By dissolving binaries (natural/artificial, matter/ meaning), the work embraces Ursula K. Le Guin’s permanent uncertainty as fertile ground for reimagining life.

This is not a simulation but a provocation. Just as Margulis reshaped evolutionary narratives, Codex Virtualis (2020–ongoing) invites us to perceive intelligence as a collaborative dance. Microbial rhythms, ancestral wisdom, and algorithmic processes intra-act, generating a techno-biological installation where agency is distributed and decentralized. The project rejects anthropocentrism, instead aligning with Indigenous perspectives that recognize kinship among all matter—stone, microbe, machine.

In merging Margulis’ biology with Barad’s philosophy and the indeterminacy of art, Codex Virtualis: Emergence v.2 becomes a site of speculative symbiosis. It asks: If life arose from ancient mergers, what futures might emerge from today’s alliances between carbon and silicon, organism and algorithm? The answer lies not in resolution but in the generative tension of coexistence—a testament to Interspecifics’ belief that to exist is already to collaborate.

Definitions and Interpretations of Life

Life will always remain something apart, even if we should find out that it is mechanically aroused and propagated down to the minute detail (Virchow 1855).

Life is power, force, or property of a special and peculiar kind, temporarily influencing matter and its ordinary force but entirely different from, and in no way correlated with, any of these (Beale 1871).

Living things are peculiar aggregates of ordinary matter and ordinary force, which in their separate states do not possess the aggregates of qualities known as life (Bastian 1872).

Life is neither a principle nor a resultant. It is not a principle because this principle, in some way dormant or expectant, would be incapable of acting by itself. Life is not a resultant either because the physicochemical conditions that govern its manifestation cannot give it any direction or any definite form. [. . .] None of these two factors, neither the directing principle of the phenomena nor the ensemble of the material conditions for its manifestation, can alone explain life. Their union is necessary. Consequently, life is a conflict for us (Bernard 1878a).

If I had to define life in a single phrase [. . .] I should say: life is creation (Bernard 1878b).

Life has the following characteristics: (1) character of animal or plant manifested by the metabolism, growth, reproduction, and internal powers of adaptation to the environment; (2) vital force distinguished from inorganic matter; (3) experience of animal from birth to death; (4) conscious existence; (5) of being alive; (6) duration of life; (7) individual experience; (8) manner of living; (9) life of the company; (10) the spirit; and (11) a duration of similarity (Webster’s International Dictionary 1934).

Life is replication plus metabolism. Replication is explained by the quantum-mechanical stability of molecular structures, while metabolism is explained by the ability of a living cell to extract negative entropy from its surroundings in accordance with the laws of thermodynamics (reformulated by Dyson [Dyson 1997] from Schrödinger 1944).

The essential criteria of life are twofold: (1) the ability to direct chemical change by catalysis; (2) the ability to reproduce by autocatalysis. The ability to undergo heritable catalysis changes is general and is essential where there is competition between different types of living things, as has been the case in the evolution of plants and animals (Alexander 1948).

Life is not one thing but two, metabolism and replication, [. . .] that are logically separable (Von Neumann 1948).

Life is a potentially self-perpetuating open system of linked organic reactions, catalyzed stepwise and almost isothermally by complex and specific organic catalysts, which are themselves produced by the system (Perrett 1952).

Life is the repetitive production of ordered heterogeneity (Hotchkiss 1956).

The three properties of mutability, self-duplication, and hetero-catalysis comprise a necessary and sufficient definition of living matter (Horowitz 1959).

Any system capable of replication and mutation is alive (Oparin 1961).

Life is a partial, continuous, progressive, multiform, and conditionally interactive, self-realization of the potentialities of the atomic electron state (Bernal 1967).

Life is a hierarchical organization of open systems (Von Bertalanffy 1968).

Life is a structural hierarchy of functioning units that has acquired through evolution the ability to store and process the information necessary for its own reproduction (Gatlin 1972).

Life is made up of three basic elements: matter, energy, and information. [. . .] Any element in life that is not matter and energy can be reduced to information (Fong 1973).

Life is a metabolic network within a boundary. All that is living must be based on autopoiesis, and if a system is discovered to be autopoietic, that system is defined as living, i.e., it must correspond to the definition of minimal life (Maturana and Varela 1973).

The criteria of living systems are metabolism, self-reproduction, and spatial proliferation. The more complicated kinds also have the ability to mutate and evolve (Gánti 1974).

We regard as alive any population of entities that has the properties of multiplication, heredity, and variation (Maynard-Smith 1975).

Life is that property of matter that results in the coupled cycling of bioelements in aqueous solution, ultimately driven by radiant energy to attain maximum complexity (Folsome 1979).

Living units are viewed as objects built up of organic compounds as dissipative structures, or at least dynamic low entropy systems significantly displaced from thermodynamic equilibrium (Prigogine 1980, Prigogine and Stengers 1984, cited and reformulated by Korzeniewski 2001).

The sole distinguishing feature, and therefore the defining characteristic, of a living organism is that it is the transient material support of an organization with the property of survival (Mercer 1981).

A living organism is defined as an open system that is able to fulfill the following condition: it is able to maintain itself as an automaton. [. . .] The long-term functioning of automata is possible only if there exists an organization building new automata (Haukioja 1982).

The uniqueness of life seemingly cannot be traced down to a single feature that is missing in the non-living world. It is the simultaneous presence of all the characteristic properties [. . .] and eventually many more, that makes the essence of a biological system (Schuster 1984).

Replication—a copying process achieved by a special network of interrelatedness of components and component-producing processes that produces the same network as that which produces them—characterizes the living organism (Csanyi and Kampis 1985).

Life is synonymous with the possession of genetic properties. Any system with the capacity to mutate freely and to reproduce its mutation must almost inevitably evolve in directions that will ensure its preservation. Given sufficient time, the system will acquire the complexity, variety, and purposefulness that we recognize as being alive (Horowitz 1986).

Life is characterized by maximally complex determinate patterns, patterns requiring maximal determinacy for their assembly. [. . .] Biological templates are determinant templates, and the uniquely biological templates have stability, coherence, and permanence. [. . .] Stable template reproducibility was the great leap, for life is matter that learned to recreate faithfully what are in all other respects random patterns (Katz 1986).

A living system is an open system that is self-replicating, self-regulating, and feeds on energy from the environment (Sattler 1986).

Just as wave-particle duality signifies microscopic systems, irreversibility and trend toward equilibrium are characteristic of thermodynamic systems, space-symmetry groups are typical for crystals, so do organization and teleonomy signify animate matter. Animate, and only animate matter can be said to be organized, meaning that it is a system made of elements, each one having a function to fulfill as a necessary contribution to the functioning of the system as a whole (Lifson 1987).

The characteristics that distinguish most living things from non-living things include a precise kind of organization, a variety of chemical reactions we term metabolism, the ability to maintain an appropriate internal environment even when the external environment changes (a process referred to as homeostasis), movement, responsiveness, growth, reproduction, and adaptation to environmental change (Villee et al. 1989).

Life is the ability to communicate (de Loof 1993).

Life is an expected, collectively self-organized property of catalytic polymers (Kauffman 1993).

Life is like music; you can describe it but not define it (Lazcano 1994).

Life may [. . .] be described as a flow of energy, matter, and information (Baltscheffsky 1997).

It is suggested that the existence of the dynamically ordered region of water realizing a boson condensation of evanescent photons inside and outside the cell can be regarded as the definition of life (Jibu et al. 1997).

Living organisms are systems characterized by being highly integrated through the process of organization driven by molecular (and higher levels of) complementarity (Root-Bernstein and Dillon 1997).

First, we give the definition of a biosystem as an adaptive, complex, dynamic system that is alive to some degree (Clark and Kok 1998).

A living entity is defined as a system that, owing to its internal process of component production and coupled to the medium via adaptative changes, persists during the time history of the system (Luisi 1998).

Life is seen as a recursive (self-producing and self-reproducing) organization where dynamic and informational levels are mutually dependent (Bergareche and Ruiz-Mirazo 1999).

To Schrödinger’s (1944) mother of all questions “What is life?” biologists can therefore answer today that they do not consider it some magical force that animated lifeless materials, but rather an emergent property based on the behavior of the materials that make up living things (Turian 1999).

Life is defined as a material system that can acquire, store, process, and use information to organize its activities (Dyson 2000).

Life is defined as a system of nucleic acid and protein polymerases with a constant supply of monomers, energy, and protection (Kunin 2000).

A potentially useful conceptual approach to the question of life’s definition is to consider the origin of life as a sequence of “emergent” events, each of which adds to molecular complexity and order (Hazen 2001).

We adopt this weak definition of life. A living system occupies a finite domain, has structure, performs according to an unknown purpose, and reproduces itself (Sertorio and Tinetti 2001).

The characteristics of artificial life are emergence and dynamic interaction with the environment (Yang et al. 2001).

Ignoring the misgivings of those few life-origin theorists with “mule” fixations, life is the ‘symphony’ of dynamic and highly integrated algorithmic processes that yields homeostatic metabolism, development, growth, and reproduction (Abel 2002).

Life is the process of existence of open non-equilibrium complete systems that are composed of carbon-based polymers and are able to self-reproduce and evolve on the basis of template synthesis of their polymer components (Altstein 2002).

Any living system must comprise four distinct functions: (1) increase of complexity; (2) directing the trends of increased complexity; (3) preserving complexity; and (4) recruiting and extracting the free energy needed to drive the three preceding motions (Anbar 2002).

Life is defined as a system capable of (1) self-organization; (2) self-replication; (3) evolution through mutation; (4) metabolism; and (5) concentrative encapsulation (Arrhenius 2002).

Life is defined as a self-sustained molecular system transforming energy and matter, thus realizing its capacity of replication with mutations and anastrophic evolution (Baltscheffsky 2002).

Life appears as a set of symbiotically-linked molecular engines, permanently operating out of equilibrium, in an open flow of energy and matter, although recycling a great deal of their own chemical components, through cyclic chemistry (Boiteau 2002).

Life is a chemical system capable of transferring its molecular information independently (self-reproduction) and also capable of making some accidental errors to allow the system to evolve (evolution) (Brack 2002).

In order to be recognizable life must: (1) be a non-equilibrium chemical system; (2) contain organic polymers; (3) reproduce itself; (4) metabolize by itself; (5) be segregated from the environment (Buick 2002).

We consider to be alive any homo- or heterotrophic cellular irreversible heat engine, or their assembly, that carries instructions for its function, reproduction, topical location, individuality, and life cycle (Eirich 2002).

The living organism is a multilevel open catalytic system achieved in its evolutionary development of maximal catalytic activity in basic process and possessing the property of self-reproduction. Life is a process of functioning of living organisms (Erokhin 2002).

Paraphrasing Theodosius Dobzhansky: life is what the scientific establishment (probably after some healthy disagreement) will accept as life (Friedman 2002).

Life is matter that makes choices, binds time, and breaks gradients (Guerrero 2002).

Living beings are complex functional systems. Life is an abstract concept describing properties of cells, concrete objects. Life is the process manifested by individualized evolutionary metabolic systems. The functions, which are called life, are: metabolism, growth, and reproduction with stability through generations (Guimarães 2002).

Life is an energy-dependent chemical cyclic process that results in an increase of functional and structural complexity of living systems and their inhabited environment (Gusev 2002).

Life is simply a particular state of organized instability (Hennet 2002).

Life is synonymous with the possession of genetic properties, i.e., the capacities for self-replication and mutation (Horowitz 2002).

Life is a system that has subjectivity (Kawamura 2002).

Life is metabolism and proliferation (Keszthelyi 2002).

Life is a new quality brought upon an organic chemical system by a dialectic change resulting from an increase in the quantity of complexity of the system. This new quality is characterized by the ability of temporal self-maintenance and self-preservation (Kolb 2002).

Life is a highly organized form of intensified resistance to spontaneous processes of destruction developing by means of expedient interaction with the environment and regular self-renovation (Kompanichenko 2002).

Any system that creates, maintains and/or modifies dissymmetry is alive (Krumbein 2002).

A terrestrial living entity is an ensemble of molecular-informational feedback-loop systems consisting of a plurality of organic molecules of various kinds, coupled spatially and functionally by means of template-and-sequence directed networks of catalyzed reactions and utilizing, interactively, energy and inorganic and organic molecules from the environment. A living entity is an uninterrupted succession of ensembles of feedback-loop systems evolved between the emergence time and the moment of observation (Lahav 2002).

It’s alive if it can die (Lauterbur 2002).

From a chemical point of view, life is a complex autocatalytic process. This means that the end products of the chemical reactions in a living cell (nucleic acids, polypeptides and proteins, oligo- and polysaccharides) catalyze their own formation. From a thermodynamical point of view, life is a mechanism that uses complex processes to decrease entropy (Markó 2002).

Life is an attribute of living systems. It is continuous assimilation, transformation and rearrangement of molecules as per an in-built program in the living system so as to perpetuate the system (Nair 2002).

Life is a system that can reproduce itself using genetic mechanisms (Noda 2002).

Life is a structurally stable negentropy current supported by self-correction for the biological hereditary genetic code [. . .] providing an energy inflow (Polishchuck 2002).

Life is instantiated by the objects that resist decay by means of constructive assimilation (Rizzotti 2002).

We propose to define living systems as those that are: (1) composed of bounded micro-environments in thermodynamic disequilibrium with their surroundings; (2) capable of transforming energy to maintain their low-entropy states; and (3) able to replicate structurally distinct copies of themselves from an instructional code perpetuated indefinitely through time despite the demise of the individual carrier through which it is transmitted (Schulze-Makuch et al. 2002).

Life is a form of matter organization that is energetically and informationally self-supported, with a good capacity of self-instruction and creation (Scorei 2002).

Life is the ability of an organism to formulate questions (Soriano 2002).

Life is a historical process of anagenetic organizational relays (Valenzuela 2002).

Life is a population of functionally connected, local, non-linear, informationally-controlled chemical systems that are able to self-reproduce, to adapt, and to coevolve to higher levels of global functional complexity (Von Kiedrowski 2002).

A living system is one capable of reproduction and evolution, with a fundamental logic that demands an incessant search for performance with respect to its building blocks and arrangement of these building blocks. The search will end only when perfection or near perfection is reached.

Without this built-in search, living systems could not have achieved the level of complexity and excellence to deserve the designation of life (Wong 2002).

The existence of a genome and the genetic code divides living organisms from non-living matter (Yockey 2002).

These definitions of life were selected by Interspecifics from Radu Popa, Between Necessity and Probability: Searching for the Definition and Origin of Life (Berlin: Springer-Verlag, 2004), reproduced with permission of Springer Nature Customer Service Center.

Revisiting Pixels and Letters Sarah Rosalena

p. 49

Portrait of Sarah Rosalena with loom. Photo by Mike Vitelli.

p. 53

Detail of weaving draft for Sarah Rosalena, Above Below Resolution, 2023.

pp. 54–55

Detail of Sarah Rosalena, Above Below, 2020. AI-generated textile, training set: Mars Reconnaissance Orbiter, HiRISE ice images. Photo by Jenalee Harmon.

p. 57

Detail of Sarah Rosalena, Exit Grid, 2023. Hand-dyed wool and cotton yarn, 52 × 41 in. Courtesy of the artist. Photo by Ian Byers Gamber.

p. 58

Detail of AI-generated vectors for Sarah Rosalena, Letterforms series, 2017.

All images courtesy of the artist.

Digital art emerges from a dynamic process where pixels—the smallest controllable elements of a digital image—serve as foundational building blocks. Through rendering techniques, layering, and intentional manipulation of resolution and color, pixels bring specific elements into focus while allowing others to fade into obscurity. This interplay reflects not only the fluidity of digital media but also the losses and absences inherent in binary computation, where the interplay of 0s and 1s defines what is visible or hidden. Within this complex folding, resolution reveals contrasts of light and dark, seen and unseen, as digital renders are translated into material forms.

Between 2017 and 2020, before the widespread emergence of OpenAI tools and AI-driven image generators, I conducted visual experiments using machine learning models such as generative adversarial networks (GANs) and convolutional neural networks (CNNs). These experiments led to the creation of works like Letterforms (2017), Codex (2017), and Above Below (2018–23). Although these works originated as digital images created by machine learning, they were intentionally designed to serve as templates for weaving and digital fabrication, bridging the digital framework with the material world and transforming machine learning in the process. The series Exit Grid (2018–23) did not directly employ machine learning but incorporated elements inspired by its architecture, such as noise, computer-generated grids, pixel structures, and digitally rendered colors, reflecting the aesthetic and conceptual influence of machine learning processes. Each piece provokes in me undeniable feelings of curiosity and concern through critical haptic acts between hand and machine.

I have not used any commercial AI platforms such as Midjourney, DALL-E, or other AI image generators that create images from text descriptions, mainly because of their negative environmental impact and the extreme power and energy demands that contribute to climate change. In addition, these extractive frameworks exploit artists by scraping millions of images from the internet and training on them, without permission and against copyright. AI is a tool of power, shaped by the systems that deploy it to reinforce control, exploitation, and inequality. Ethical engagement demands not only resistance but a fundamental dismantling of its oppressive applications, pushing for accountability and alternative frameworks that challenge dominance and extraction.

The textiles and sculptures included in All Watched Over by Machines of Loving Grace, made from 2017 to 2023, were created from thousands of artifacts generated from pixels, letters, and color—the most basic units through which algorithms function. This essay explores the work’s materiality, challenging the boundaries between digital and physical forms. Materials are sites of transformation—exploring how forms can invert and collapse rigid divisions and disrupt binary structures, opening new possibilities for interpretation and meaning. The expansive unfolding of new structures in the 3D plane becomes a site for new technologies and media resolutions to emerge from what was once compromised and removed. This text aims to make visible the ethical and systemic challenges of AI by materializing machine learning, from the physical to digital space and back again.

Weaving

My entrance into weaving came by hand, taught by my grandmother and mother in the Wixárika tradition, which goes back multiple generations in my bloodline. In my experiments with AI before 2020, I was careful not to draw from Wixárika imagery, intentionally protecting it from being extracted or commodified. While I have not directly incorporated traditional knowledge into these works, I have used my weaving hand to fragment and reimagine the digital framework at its most granular physical points. These works are rooted in histories of mapping and cartography—tools historically used for power and control over land, the cosmos, and language.

Weaving embodies a profound duality: warp and weft, over and under, 0 and 1, front and back, up and down. While historically it has been a method for creating textiles by interlacing threads, its significance reaches far beyond utility, serving as a form of language akin to text or code. Weaving reveals underlying mathematical principles through the creation and mastery of string figures, forming networks that support technological structures and infrastructures. From the traditional backstrap loom to the modern digital Jacquard loom—the very precursor of the computer—there is a direct lineage between weaving and the logic of coding, each rooted in the manipulation of patterns and systems.

First patented in 1804, the Jacquard loom parallels early computational devices like Babbage’s Analytical Engine and the Electronic Numerical Integrator and Computer (ENIAC), in which binary operations produce intricate patterns and calculations. The weft, a horizontal thread, navigates among numerous vertical warp threads, each pattern requiring a tailored set of punch cards to govern movements. Punch cards have evolved into the digital Jacquard loom. Above Below Resolution (2023) and Exit Grid are woven on a manual TC2 Jacquard loom using black and white pixels designated for each thread. I create digital weaving patterns in black-and-white TIFF files, where black pixels raise the warp and white pixels lower it, with resolutions of 16–30 pixels per inch. This process offers an opportunity to unravel contemporary understandings of computation by materializing computational processes where objects can be both singular and multiple. Computerized weaving resurrects and memorializes a multitude of pasts and futures, haunting itself. AI-generated textiles reveal a tactile connection with the pixels, tracing production and operating between reality/artificiality and material/immaterial as alternative renders: scaling and low/high resolution, changing color gradients of yarn, unraveling yarn, and frayed borders.
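The pixel-to-thread mapping described above can be sketched in a few lines. This is an illustrative reconstruction, not Rosalena's actual toolchain: the `liftplan` function and the toy pattern are my own, standing in for the black-and-white TIFF files that drive the TC2 loom.

```python
def liftplan(bitmap):
    """Convert a 2D grid of 0/1 pixels into per-pick lift instructions.

    Each row of pixels is one weft pick; a black pixel (1) raises the
    corresponding warp thread, a white pixel (0) lowers it.
    """
    plan = []
    for pick, row in enumerate(bitmap):
        raised = [i for i, px in enumerate(row) if px == 1]  # black = warp up
        plan.append((pick, raised))
    return plan

# A tiny 4 x 4 pattern: a plain-weave checkerboard.
pattern = [
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
]
for pick, raised in liftplan(pattern):
    print(f"pick {pick}: raise warps {raised}")
```

At 16 pixels per inch, a row of this grid corresponds to 16 warp threads per inch of cloth; at 90, the same image commands more than five times as many threads, which is what makes the lower resolution read as fragmentation.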

Above Below

Above Below is a textile series created from digital images produced between 2018 and 2020, with support from NASA’s Jet Propulsion Laboratory. The images were derived from over 60,000 satellite photographs captured by the Mars Reconnaissance Orbiter, which has been documenting the surface of Mars since 2006.

Machines view Mars first as an abstraction, transforming numerical data and imaging from telescopes and satellites into blown-up worlds, then concrete places. Satellite imagery is inherently political from its use on Earth, recognized by machines as a pixel grid of numeric intensity values that inform classification and probability on physical properties and processes. Each image depends on the number of pixels—each fixed with complexity per pixel. Similarly, woven geospatial imagery on the Jacquard loom embodies the computer’s earthly origin from cotton thread to pixel and around again—a web of past and future geographies. AI-generated textiles reveal a tactile connection with the pixels they signify, tracing the means of their production and operating between reality/artificiality, material/immaterial.

The exchange between imaging and the loom untangles contemporary understandings of mapping by materializing computation. Colonialism extends beyond Earth, embedded in the planetary imagination—a framework that disregards histories of structural violence, geologic entanglements, and the dispossession of Indigenous lands. This legacy is embedded in the cartographic tools, surveillance, and artificial intelligence now mapping celestial bodies. As computational systems mediate our perception of space, they inherit these extractive logics, flattening histories into datasets and rendering landscapes. In this AI-generated Jacquard textile, the act of weaving pulls apart the data, unfolding the disruption and opening imposed boundaries. The pixel becomes the warp and weft, a threaded reversal of geospatial hierarchies, where “above” and “below” dissolve, and power shifts from the center to the periphery. Pixels distort, threads break, and the material itself resists resolution—offering an alternative mapping that fractures the colonial grid and reimagines space through a tactile, embodied counter-narrative.

The AI-rendered landscapes are machine hallucinations of land—they are products of the cartographic imagery embedded in their inherited resolution. Out of a series of rendered images, I handpicked multiple images to be used as templates for digital weaving—either mechanized Jacquard loom or manual TC2 digital Jacquard loom. Above Below Resolution was born from this series. Here, patterns operate between signifiers of planetary change—reds, blues, and whites—the desertification of the Blue Planet and the colonization of the Red Planet, captured from above and below. An experiment in pixel density, this work was made at a significantly lower resolution—16 threads per inch rather than 90.

Weaving at a low resolution structurally breaks down digital imagery by simplifying the intricate details of high-resolution landscapes. This dissolves the landscapes, transforming high-resolution satellite imagery into a fragmented and abstracted representation through the process of weaving. In this reimagining, the work evokes the legacy of colonial cartography and the constructed narratives of New World geographies, recontextualizing them through the lens of computation and their role in shaping our understanding of both terrestrial and extraterrestrial landscapes. Translating a high-resolution image into a lower resolution creates gaps and transparencies in the weave, where the finer details are lost or distorted. This process exposes the limitations of the original digital image, revealing an inability to fully translate or “convert” the complexity of the landscape into a lower form. These unresolvable gaps speak to the failures of colonial frameworks, where attempts to impose order and control over complex geographies and histories become fragmented and incomplete, unable to be fully represented or contained.
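The loss of detail described above is easy to demonstrate computationally. The following is a minimal sketch of my own, not the artist's process: it shrinks a 1-bit image by averaging blocks of pixels and thresholding, the simplest way a high-resolution image is forced into a lower one.

```python
def downsample(bitmap, factor):
    """Shrink a 1-bit image by averaging factor x factor blocks,
    then thresholding at 0.5. Any detail finer than one block is lost."""
    h, w = len(bitmap), len(bitmap[0])
    out = []
    for by in range(0, h, factor):
        row = []
        for bx in range(0, w, factor):
            block = [bitmap[y][x]
                     for y in range(by, min(by + factor, h))
                     for x in range(bx, min(bx + factor, w))]
            row.append(1 if sum(block) / len(block) >= 0.5 else 0)
        out.append(row)
    return out

# A fine checkerboard: the finest detail a pixel grid can hold.
fine = [
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
]
coarse = downsample(fine, 2)  # every 2x2 block averages to 0.5; the pattern vanishes
```

Halving the resolution collapses the checkerboard into a uniform field: the "conversion" cannot carry the original structure, only a flattened trace of it.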

Exit Grid

Exit Grid is a series of textiles that incorporates elements used in machine learning, such as noise, computer-generated grids, pixel structures, and digitally rendered colors to manually disrupt and break down digital resolution, pixel by pixel. My creative process begins by designing patterns in black-and-white bitmaps in Photoshop, which I then export as TIFF files. My computer interprets the black-and-white pixels as warp heddles moving up or down. During the weaving process, I add physical color to the digital patterns, controlling which colors are represented in the woven image. By using hand-dyed ombre yarn from a pixelated palette, I introduce alternative resolutions. This ombre effect moves through the textiles’ black grids, disintegrating defined color boundaries and, at times, overtaking the gridlines entirely. Isolated red, green, and blue pixels—the essential colors of digital display technology—blend and transform, breaking down the inherent divisions of digital imaging. Black dead pixels further dissolve the computerized grid’s boundaries, creating handwoven errors, glitches, and noise through software. Each pixel space within the grid is randomized, producing exit strategies against defined pixelation. These works demonstrate anti-borders that unravel lines and edges, rendering them borderless and unresolvable. Simultaneously, they introduce multiplicities in the form of crosses, star patterns, and sacred eyes, expanding the axes and challenging traditional notions of structure and form.

Noise, in the context of machine learning, refers to random or unpredictable data that is deliberately introduced to challenge and test the model’s ability to produce accurate predictions. This noise disrupts the process, preventing the machine from achieving perfect clarity or resolution. Noise is random and unstructured, degrading images so that models train against themselves. When woven into the textile, this noise manifests as a chaotic interference, a storm of grainy black-and-white pixels that reflects the model’s struggle to generate coherent outputs and highlights the inherent instability and unpredictability of computational systems. For Exit Grid, I created a series of noise patterns in Photoshop with different resolutions of pixels per inch (PPI), then used these patterns to create templates for manual weaving. This random degradation through unprogrammed transformation and mutation breaks digital resolutions through intentional mistakes and errors, releasing them.
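A noise template of the kind described above — a field of black-and-white pixels with no structure — can be sketched in a few lines. This is my own illustrative reconstruction, not the artist's Photoshop workflow; the `noise_bitmap` function and its parameters are assumptions.

```python
import random

def noise_bitmap(width, height, density=0.5, seed=0):
    """Unstructured black-and-white noise, akin to a noise layer
    used as a weaving template: each pixel is independently
    black (1) with probability `density`, else white (0)."""
    rng = random.Random(seed)  # seeded so the template is reproducible
    return [[1 if rng.random() < density else 0 for _ in range(width)]
            for _ in range(height)]

# An 8 x 8 noise template at 50% density.
template = noise_bitmap(8, 8)
```

Varying `width` and `height` per inch of cloth is the software analogue of the different PPI resolutions the essay mentions; the seed fixes one particular storm of pixels out of the many the generator could produce.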

Additive and subtractive textile techniques expose the warp and weft, revealing their computational logic while turning the interlocking threads into sites of dialogue, disruption, and material memory. I remove yarn and purposefully weave long floats under the warp as a method of subtractive weaving to produce black voids. These “exit points” break down any chance of digital accuracy. What is hidden, unrendered, obsolete, errored, cropped, or scaled out, are used as sites of transformation, allowing a manual exit to colonial epistemologies that divide and anchor coordinates for classification, categorization, extraction, biases, structural racism, and imperialism.

Color is a crucial variable in the coded language of weaving, with different hues signifying various cultural and technological narratives. Digital colors are defined by numeric values, allowing precise manipulation and reproduction. Colors can be altered by hand through gradients that move beyond their inherited pixelated structure. Gradients, representing transitions and transformation between states or values, are fundamental in material alterations of digital artifacts. In machine learning, gradients optimize algorithms, guiding them towards more “accurate” solutions through compromise. Here, I use gradients to produce errors, oozing past boundaries to override the grid and manually exit.
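The gradients that "optimize algorithms" in machine learning are a concrete mathematical operation: parameters are nudged downhill along the slope of an error function. A minimal sketch of my own (a toy one-dimensional example, not any system used in the work):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Follow the negative gradient downhill toward a minimum."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # step against the slope
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)  # approaches 3.0
```

Where the algorithm uses the gradient to converge on a single "accurate" answer, the hand-dyed ombre gradients in the textiles do the opposite, spreading past boundaries instead of settling on a point.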

Letterforms

Language, whether written or visual, is pivotal in grasping and expressing intricate ideas. Handwriting connects us to reality through the primal act of inscribing symbols from remembered patterning. In each stroke and curve, handwriting reveals a rich history of linguistic evolution, bridging the gap between traditional scripts and digital text. Large language models such as OpenAI’s ChatGPT make language into a sterile and homogenized package for consumption. These mega-databases are dominated by the lettering of colonialism and capitalism. Language creation and dissemination operate from the conditions of memory. New letterforms offer acts of computational disobedience.

Letterforms (2017) is a series of AI-generated letters that explore the limits of written language systems to the point where they break down beyond the compromised inherited structures of computation. Each form was generated with a recurrent neural network (Google’s Sketch-RNN model) trained with a database of written letters from languages around the world, mainly Indigenous and non-Western. I produce new letters, mediated by programming languages and then coded into machine intelligence, as memorials to what is lost. The forms contain hundreds of artifacts, from tracing alphabet letters as vector points to stroke-based drawings as “handwriting,” training the computer to remember languages other than Roman English, over one million times, with intelligence to generate new letterforms. The AI model generates new letterforms, understanding each vector point as a squiggle, creating emergent and disobedient forms open for interpretation.

Over a hundred letters, as vector files, were created through the model’s output, or final execution of the model, using latent space interpolation and made into the works Codex and Letterform. Selected vectors were used as engraving templates for a CNC machine to carve out alphabets and large letterforms in acrylic and architecture foam. These new forms operate as memorials and preserved artifacts of what the computer fails to remember, learn, and know under its biased colonial architecture. They invite us to reexamine coded language as evolution and its consequences.
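Latent space interpolation, mentioned above, means sliding between two points in the model's internal vector space and decoding each intermediate point into a new form. A minimal sketch under stated assumptions: the three-dimensional latents below are made up for illustration (real Sketch-RNN latents are far higher-dimensional), and the decoding step is omitted.

```python
def lerp(z_start, z_end, t):
    """Linearly interpolate between two latent vectors at t in [0, 1]."""
    return [a + t * (b - a) for a, b in zip(z_start, z_end)]

# Two hypothetical latents standing in for encoded letterforms.
z_a = [0.0, 1.0, -0.5]
z_b = [1.0, 0.0, 0.5]

# Five evenly spaced frames from z_a to z_b; decoding each frame
# would yield one intermediate, emergent letterform.
frames = [lerp(z_a, z_b, i / 4) for i in range(5)]
```

The interpolated frames are where the "disobedient" forms appear: points in between trained letters that correspond to no letter the model was ever shown.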

Threading the Digital

Recent machine learning models often fail to account for the nuanced, human, and ecological factors that contribute to the crisis of desertification and environmental degradation. Given AI’s rapid progress with text-based generative models, I have been discouraged by the art world’s lack of critical engagement with the intersection of artificial intelligence and climate change. Much of my earlier work, which was intended to challenge the systems and normative views of “progress” and “innovation,” has often been misunderstood, largely due to a widespread lack of understanding of what artificial intelligence truly is. AI is extraction, perpetuating systems of exploitation and control over knowledge, labor, and resources. I weave between digital and material worlds, integrating earth, clay, beads, natural dyes, and foraged plants—anchoring technology in the land rather than using AI. In this space, making becomes an act of resistance, an infinite unfolding of anti-colonial care, threading the digital into the boundless.

Deshrined Ancestors: AR + AI Minne Atairu

p. 65

Bronzes and ivories from the old kingdom of Benin. Exhibition at the galleries of M. Knoedler and Company, New York, November 25–December 14, 1935. Courtesy of the Metropolitan Museum of Art.

pp. 66–68

All images courtesy of Minne Atairu.

The 1897 British colonial invasion of Benin Kingdom was fueled by intelligence reports that identified the kingdom’s natural and material wealth, including sacred and secular objects made of bronze, wood, terracotta, ivory, iron, coral, and leather. Armed with this knowledge, British soldiers razed the royal palace—a cultural epicenter that housed artist studios, residencies, and repositories of imported art materials. Ọba Ovọnramwẹn—the kingdom’s leader and sole patron of the arts—was deposed and exiled. The royal archive, rich with centuries-old artifacts, was looted and its contents divided into “official” and “unofficial” spoils. The looted artifacts, later referred to as the Benin Bronzes, were deemed the “official booty of the expedition” and were shipped to England where they were auctioned off to “defray the cost of pensions” for the colonial military forces.1 A curator’s 1898 ledger titled “Fate of the Benin Bronzes” documents their distribution to prominent institutions, including the British Museum, Pitt Rivers Museum, and Horniman Museum in England; and the Berlin Ethnological Museum and Dresden Museum of Ethnology in Germany. Over a century later, these looted artifacts remain in the collections of approximately 160 museums worldwide.

The colonial upheaval led to an exodus of artists from Benin City to satellite towns where they were compelled to abandon their craft and take up subsistence farming to survive. This period of displacement marked the beginning of a 17-year artistic recession (1897 to 1914) for which no known visual or archival records have survived.

To address this dearth in historical documentation, I began a speculative project titled Igùn AI. The project name Igùn honors the Igùn Eronmwon (singular: Igùn)—Benin’s hereditary guild of bronze-casters whose artistic objects are prominently featured in my dataset. Igùn AI is guided by two questions:

1 Ekpo Eyo, “The Dialects of Definitions: ‘Massacre’ and ‘Sack’ in the History of the Punitive Expedition,” African Arts 30, no. 3 (Summer 1997): 34.

1. What artifacts might have been produced during the 17-year artistic recession?

2. What alternative materials and artistic processes might have displaced artists adopted?

In Igùn: Prototypes I–IX, I used StyleGAN2 (a machine-learning algorithm) to generate images and videos of speculative Benin Bronzes.2 The process involved fine-tuning the algorithm on a dataset of photographs depicting looted Benin Bronzes, which I gathered from auction catalogs and museum collection records. While the speculative outputs provided invaluable insights for the aforementioned questions, the process highlighted a critical limitation: StyleGAN2 is designed primarily for generating two-dimensional images, which means it falls short in capturing the three-dimensional richness that is central to Benin’s sculptural tradition. This limitation necessitated the next phase of my research: the transition from 2D to 3D generative models.

Advancements in text and image-conditioned 3D generative models have been instrumental in overcoming the limitations of StyleGAN2. Models such as Rodin Diffusion enable the synthesis of volumetrically and geometrically consistent renderings that better align with the material and spatial qualities of Benin’s sculptural tradition. My current research is guided by two questions:

1. To what extent can I synthesize 3D structures that mirror the visual characteristics of the images and videos generated for Igùn: Prototypes I–IX?

2. To what extent can the synthesized 3D structures faithfully recreate the partially hidden and occluded parts of the images and videos generated for Igùn: Prototypes I–IX?

2 Minne Atairu, “Reimagining Benin Bronzes Using Generative Adversarial Networks,” AI & SOCIETY 39, no. 1 (September 2023): 91–102.

These questions are explored through an augmented reality (AR) assemblage titled Deshrined Ancestors (2024), sculpted from ten generations of Igùn AI prototypes. To develop Deshrined Ancestors, I selected AI-generated images and videos from Igùn: Prototypes I–X, used image-conditioned 3D synthesis to convert them into 3D virtual objects, and then composed them into one larger digital sculpture decorated with gold chains and cowrie shells. The evolution of this process enabled me to address my research questions through spatial and material explorations using 3D rendering software and also created opportunities for including my own artistic hand. While Deshrined Ancestors remains a collaboration with AI, the piece incorporates my own decisions around form, color, and material that, in the Igùn prototypes that preceded it, were determined by the algorithm alone.

I chose to render Deshrined Ancestors in rubber. The exploration of rubber as an artistic material holds particular significance when examined through the framework of colonial-era exploitation and resource extraction in Benin. Following the 1897 invasion, the region was increasingly recognized for its extensive rubber forests—a raw material crucial to Britain’s industrial expansion. Rubber was essential for manufacturing a wide range of products, including hoses, tubing, springs, washers, and diaphragms. As a result, British colonial administrators promoted the rubber trade and introduced new rubber regulations that undermined pre-1897 property rights. For example, the colonial designation of protected trees and forest reserves overrode longstanding autonomous access to uncultivated lands, and stripped Benin farmers of their customary right to reclaim and own fallow land after ten years.3

The enforcement of the rubber regulations (1898–99) led to numerous prosecutions. For instance, in Regina v. Osufu Jebu, Sumola, and Bakari, the defendants were charged with smuggling “adulterated and very offensive” rubber; in Regina v. Ground Nut, Jack, and Josiah, the accused were apprehended with “a lot of tools, etc., used for working rubber”; and in Regina v. Thomas Ouami, the defendant was accused of leading a gang of illicit rubber workers. Similarly, in Regina v. Ipapa, Ehenua, Obasuye, Asaota, and Jegede, the defendants were identified as members of a group of 150 illicit rubber tappers. Additional cases, such as Regina v. Gbeson and Aburonke, Regina v. Adeanju, Regina v. Lawojo and Omoleye, Regina v. Akinbo, Regina v. Aluko, and Regina v. Jagbohun, involved charges of “illicit rubber working” or “working rubber without a license.”4

3 James Fenske, “Trees, Tenure, and Conflict: Rubber in Colonial Benin,” Journal of Development Economics 110 (September 2014): 226–38.

While colonial-era rubber prosecutions are well-documented, much less is known about the lives of the individuals who were prosecuted. Could artists who once thrived under the Oba’s patronage be among them? And if so, could they have repurposed some of the material from their “illicit” rubber tapping for artistic production? This speculation invites exploration.

Oba’s Debris, RISD Museum Storage (1939–2020)

Deshrined Ancestors is an AR sculpture accessible via a QR code adhered to a 12 × 12 × 2 inch physical wooden platform. This stand originally supported the Benin Bronze titled Head of an Oba (circa 1700s) at the Museum of Art, Rhode Island School of Design (RISD Museum), where it was displayed from 1939 until 2020. In 2022, RISD Museum officially repatriated this Benin Bronze to the Nigerian National Commission for Museums and Monuments.5 Upon my request, the museum transferred the stand to my possession. Now repurposed, the platform continues to serve as a support structure, though in a recontextualized role: it holds my AR sculpture in the very space once occupied by the repatriated Benin Bronze. The museum’s decision to deaccession the Benin Bronze in 2020, followed by its repatriation in 2022, was precipitated by protests that occurred in 2018.6 These protests, led by students, faculty, and community members from the Rhode Island School of Design and Brown University, echoed global calls to decolonize museum collections.7

4 James Fenske, “‘Rubber will not keep in this country’: Failed Development in Benin, 1897–1921,” Explorations in Economic History 50, no. 2 (April 2013): 316–33.

5 “RISD Museum Announces the Return of a Benin Bronze to Nigerian National Collections,” RISD Museum, October 11, 2022, https://risdmuseum.org/node/1175056.

6 Dana Heng, “Protesters Request RISD Museum Return Bronze Sculpture to Nigeria,” Hyperallergic, November 30, 2018, https://hyperallergic.com/473864/protesters-request-risd-museum-return-bronze-sculpture-to-nigeria.

Provenance records indicate that RISD Museum received the Benin Bronze as a donation from Lucy Truman Aldrich in 1939. Aldrich had acquired the Benin Bronze from a 1935 sale of objects from Benin Kingdom at the Knoedler Gallery in New York. Below is an image of the Benin Bronze as cataloged by the gallery in 1935.

7 Nicholas Mirzoeff, “How France’s Restitution Report Unsettled the Conversation about Cultural Property,” Frieze, March 15, 2019, https://www.frieze.com/article/how-frances-restitution-report-unsettled-conversation-about-cultural-property.

List of AI-Generated Artifacts

YEAR  MODEL
2024  1. Text to 3D
2024  1. Text to 3D
2023  1. Text to Image; 2. Image to 3D
2023  1. Text to Image; 2. Image to 3D
2023  1. Text to Image; 2. Image to 3D
2021  1. Image generation (StyleGAN2); 2. Image to 3D
2021  1. Image generation (StyleGAN2); 2. Image to 3D
2020  1. Image generation (StyleGAN2); 2. Image to 3D
2020  1. Image generation (StyleGAN2); 2. Image to 3D
2020  1. Image generation (StyleGAN2); 2. Image to 3D
2020  1. Video generation (StyleGAN2); 2. Video to Image; 3. Image to 3D
2020  1. Video generation (StyleGAN2); 2. Video to Image; 3. Image to 3D

Bibliography

Atairu, Minne. “Reimagining Benin Bronzes Using Generative Adversarial Networks.” AI & SOCIETY 39, no. 1 (September 2023): 91–102.

Eyo, Ekpo. “The Dialects of Definitions: ‘Massacre’ and ‘Sack’ in the History of the Punitive Expedition.” African Arts 30, no. 3 (Summer 1997): 34.

Fenske, James. “‘Rubber will not keep in this country’: Failed Development in Benin, 1897–1921.” Explorations in Economic History 50, no. 2 (April 2013): 316–33.

Ikponmwosa, Frank. “Colonialism and Industrial Development in Benin Province, Nigeria.” Romanian Journal of Historical Studies 3, no. 1 (2020): 20–29.

Osagie, Joseph I., and Frank Ikponmwosa. “Craft Guilds and the Sustenance of Pre-Colonial Benin Monarchy.” AFRREV IJAH: An International Journal of Arts and Humanities 4, no. 1 (2015): 1–17.

Shokpeka, S. A., and Odigwe A. Nwaokocha. “British Colonial Economic Policy in Nigeria, the Example of Benin Province 1914–1954.” Journal of Human Ecology 28, no. 1 (October 2009): 57–66.

The Post-Truth Museum Nora Al-Badri

p. 72

Nora Al-Badri, Neuronal Ancestral Sculptures Series, 2020. Video still from projection generated by GAN (color, silent), 20 min., looped.

p. 75

3D study of Lamassu using an undisclosed museum dataset.

p. 77

Still from a machine learning training process using an undisclosed museum dataset. Study for Nora Al-Badri, Babylonian Vision, 2020. GAN video (color, silent), 25 sec.

All images courtesy of the artist.

Dear museums and colonial nations of the Global North, Your data is haunting you! Demons are waiting for you in latent space, ready to attack at any time and from any vector!

Digitization has engendered a notion that certain museums may become Datenkraken, hoarding datasets relating to their holdings (whose physical monopoly they already possess). Their practice leads to moments like this one, where I am compelled to mention the British Museum [sic] when I digitally publish or remix an artefact from Iraq (such as a 3D dataset of the beautiful Lamassu, which you see here).

Yo, maybe you can already tell where this is heading?

Let me put it to you straight: we live in a post-digital world as much as a post-colonial one. In the end, the important debate is not about what kinds of licenses should be granted, but about who owns cultural data and who controls it (and, perhaps, to whom it is attributed).

When I write that imperial museums are haunted by data, I am not referring to abstract ghosts, but suggesting that you abolish yourselves! Frankly, as long as your collections are not updated, you might as well be ghosted by your conscious audience. There are so many self-appointed “universal” or “world” museums around. But these are really, at best, nationalist spaces.

At worst, they are fascist ones.

There is nothing universal—in the sense of stateless, global, or even cosmopolitan—involved in what you are up to. Data can be stateless. I think we (the people) should build digital world museums that live up to the name. Citizens of the world cannot yet overcome national citizenship; I wonder whether data can be our avant-garde in that struggle. After all, it exists in territories that range from the somewhat uncontrollable to the outright unregulated and anarchic. Imperial museums are becoming vulnerable to hacks by decolonized minds. Datasets cannot be contained: once online or in the public domain, you can add a copyright license but it means nothing. The virality and the afterlife of (cultural) data have real beauty and power. I like to call it, in the words of Sonia K. Katyal, “technoheritage.”1 But stateless technoheritage does not mean random data without a context, or even without representation through classification. On the contrary, today we know that data without context is at best worthless and, at worst, harmful.

1 Sonia K. Katyal, “Technoheritage,” California Law Review 105, no. 4 (August 2017): 1111.

The conversation gets quite upsetting when museums use copyright to restrict the distribution of knowledge and remixes derived from their physical and virtual holdings (how can a 1000-year-old artwork have a copyright, anyway?). Museums offer many arguments when trying to prohibit reuse in the public domain: they want to “prevent commercialization” (except in their gift shops, of course), or even to protect someone or something from the “bad taste” of us peasant people and our shameless remixing. I am not joking.2

In short, museum practices in the Global North seem highly anachronistic. The suggested “democratic potential” of technology stands in sharp contrast with widespread institutional angst about their declining relevance, and their threatened status as gatekeepers of legitimate interpretation (Deutungshoheit) and representation. You are right to be afraid. Yeah, you’d better be afraid—because the digital is no slave to the original! And, believe it or not, many people do not even require “the real thing” anymore.

In the same breath—one needs to respect (yes, R E S P E C T) the fact that not all cultural data wants to be free and dispersed. This is crucial. Datasets will only dance with you once you acknowledge, see, know, or truthfully relate to them.

As a matter of fact, in very different cultures, on different continents, a dataset or 3D-printed object can hold the same power and enable the same connection as the original material item. This is the reason why some people think that certain objects should neither be exhibited and touched by custodians, nor digitized and their copies exhibited. These people view cultural data not just according to its inherent technological potential (as Western media theory so often does) but as something that is constantly being translated and given meaning by local versions of world-making, cosmologies, and ancestral knowledge. For example, in the case of the Tāmaki Paenga Hira Auckland War Memorial Museum, the Māori took the power to decide whether their objects, images, and ancestors should be digitized or not. The question is, how should the power to decide about digitization and thus representation be distributed? Who, in each case, should hold it?

2 When students from Stanford University scanned Michelangelo’s David, they had to promise to “keep renderings and use of the data in good taste” because the artifacts “are the proud artistic patrimony of Italy” (Katyal, 1148).

Decolonizing databases is not just about single objects but entire knowledge systems.

One beautiful example is Mukurtu CMS (Mukurtu is “a Warumungu word meaning ‘dilly bag’ or a safekeeping place for sacred materials”).3 It is an open-source platform and content management system (CMS) that serves diverse communities who want to manage and share their digital cultural heritage in their own way, on their own terms. Users can apply cultural protocols and traditional knowledge labels such as secret/sacred, seasonal, or women-only.
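Mukurtu’s actual schema is far richer, but the core idea of protocol-gated access can be sketched in a few lines. The record fields and the `may_view` rule below are illustrative assumptions, not Mukurtu’s API; only the label names (secret/sacred, women-only) come from the text above:

```python
# Hypothetical sketch (not Mukurtu's actual data model): how cultural
# protocols and traditional knowledge (TK) labels might gate access
# to digital heritage records.

from dataclasses import dataclass, field

@dataclass
class HeritageRecord:
    title: str
    community: str
    tk_labels: set = field(default_factory=set)  # e.g. {"secret/sacred", "women-only"}

def may_view(record, viewer_protocols):
    """A record is viewable only if the viewer's protocols cover
    every TK label the community attached to it."""
    return record.tk_labels <= set(viewer_protocols)

song = HeritageRecord("ceremony recording", "Warumungu", {"secret/sacred", "women-only"})
print(may_view(song, {"women-only"}))                   # False: one protocol missing
print(may_view(song, {"secret/sacred", "women-only"}))  # True
```

The point of the design is that the community, not the platform, decides which labels attach to a record; the software merely enforces them.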

Now with data-driven technologies like artificial intelligence (AI), the work of decolonizing becomes even more twisted… and fun. Fooling around with digital objects and releasing them from their proprietary museum systems and their commodity chains is an act of resistance against a pre-narrated archive. You gotta admit, any form of (techno) heritage is (data) fiction!

So, how will we use and shape technology to make sense of culture? Machine learning (ML) makes certain patterns visible, including those that we didn’t know about—and those that are not talked about. In Wendy Chun’s essay “Queerying Homophily” she describes how ML can offer “the generative power of discomfort.”4 Discomfort is indissociable from decolonization.

For emancipatory as well as for artistic practice a few things are worth spelling out. Most of these technologies are not yet mastered (nor fully understood) by the colonizer, that is, by the museum, this overheated, jerky colonial machine.

3 See the useful write-up of a podcast with a member of the Mukurtu team at https://theconversation.com/mukurtu-an-online-dilly-bag-for-keeping-indigenous-digital-archives-safe-112949.

4 Wendy Hui Kyong Chun, “Queerying Homophily,” Zeitschrift für Medienwissenschaften 18 (2018): 131–48.

The museum database is therefore still a political thing with emancipatory potential. One that is neither bound nor controlled by global power structures and which, on the contrary, can be a means of overcoming them. How exactly this works I will explain in a moment. But if AI is only as good as its database, the museum is probably only as good as its database, too. In a panel discussion at the World Economic Forum in Davos, in 2018, AI expert Jürgen Schmidhuber speculated that machines might soon be able to generate and gather their own data, no longer relying on open databases or permissions from museums or other institutions. This is the genesis of the (museum as a) database without walls.

For my recent work Babylonian Vision (2020) and Neuronal Ancestral Sculptures Series (2020), realized with the help of ML experts Negar Foroutan and Melika Behjati, we scraped databases from the largest collections (all of them in the Global North) of Mesopotamian, Neo-Sumerian, and Assyrian artifacts and got 10,000 images. We used these to train a neural network whose process was designed to yield abstract insights into the search for a visual language of form and pattern—delivering a speculative and anarchic archaeology.

A visual museum database contains images of original artifacts that can be used to train any generative adversarial network (GAN). The input images of these databases carry time and memory themselves, such as patina or broken pieces. Of course, the AI doesn’t see a single image, but numbers only. This is a new form of image-making, with a latent space for new synthetic images that the neural network is opening up. If you train the network with, for example, 100,000 human portraits, it will abstract the concept of a portrait and generate new portraits. The same goes for museum artifacts! GANs and their generated aesthetics certainly challenge our understanding of and obsessions with authenticity, originality, and authorship. As Nora Khan wrote on our perception of generated images:

Our rational understanding of the process actually helps us enter how provocative this method of image creation is. And with this understanding, we can linger, analyzing, close-reading the image flow for symbols and meaning.5

So, what’s the relationship between image-making and GANs? Where can I see this secret latent space? GANs could be the “start of a new paradigm in making pictures.”6 There’s no magic involved, only numbers. Nothing uncanny or dreamy to see here. And for this very reason, what we can grasp from the latent space is more fascinating—because it is the unexpected that we can find, subjective and interpretative, like an essence of many images, a mood without a frame, recalling the infinite. It reminds me of the Mesopotamian way of making images, as noted by art historian Zainab Bahrani:

5 Nora Khan, “Introduction,” in Casey Reas: Making Pictures with Generative Adversarial Networks (Anteism Books, 2019).

6 Casey Reas, Making Pictures with Generative Adversarial Networks (Anteism Books, 2019).

The images I present are often infinite in their compositional form and conception. They resist the frame and are often depicted as segments taken out of the potentiality of an endless composition that can be repeated in an endless series of mirrorings, pulling the image into a vertiginous mise en abîme […] For Mesopotamia, the place from which we have the earliest textual and archaeological evidence about concepts of the image and aesthetics, I make the case that images had a diachronic presence; they were seen as objects that transcend time and that carry or embody traces of time itself. 7

7 Zainab Bahrani, The Infinite Image: Art, Time and the Aesthetic Dimension in Antiquity (Reaktion Books, 2014), 8.

The produced images can even be photorealistic. But they’re not photos, because each one is synthesized from noise over hundreds of epochs (repeated passes in which the generator learns via feedback from the discriminator) by the neural network. How new and original are these images, really? Is this verisimilitude? And does this matter? There are even larger questions when it comes to what knowledge and forms in the arts are passed on through the generations and how we are inspired by them. For this reason, I refer to the output images as “ancestral”: what we can see through GANs are the semantics of very large visual datasets.
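The geometry of a latent space can be illustrated with a toy sketch (NumPy only). A fixed random linear map stands in for a trained generator such as StyleGAN2; this is purely an illustration of how images are decoded from latent vectors and morph smoothly along an interpolation path, not a working GAN:

```python
# Toy sketch of a GAN's latent space (numpy only; not StyleGAN2).
# A trained generator maps a latent vector z to an image; here a fixed
# random linear map stands in for it, purely to illustrate the geometry.

import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, IMG_PIXELS = 64, 16 * 16

# Stand-in "generator": in a real GAN these weights are learned
# adversarially against a discriminator over many epochs.
W = rng.normal(size=(IMG_PIXELS, LATENT_DIM))

def generate(z):
    return np.tanh(W @ z).reshape(16, 16)  # squash to pixel range [-1, 1]

z_a, z_b = rng.normal(size=LATENT_DIM), rng.normal(size=LATENT_DIM)

# Walking the latent space: images morph smoothly between two samples.
frames = [generate((1 - t) * z_a + t * z_b) for t in np.linspace(0, 1, 5)]
print(frames[0].shape)  # (16, 16)
```

Every point between `z_a` and `z_b` decodes to a plausible in-between image; this is the “space of the unknown and unseen” the essay describes, reduced to its linear-algebra skeleton.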

And in the latent space, there is semantic content that has meaning for the human eye and mind, and this resonates. After all, what is at issue is the meaning that we give to data. It is impossible to control or foresee the latent space, because it is a space of the unknown and unseen, of an accumulation of visual epistemologies or knowledge systems. The abstraction of the output allows us to contemplate the images and their language as a form of visual memory not limited to the input objects—a memory that transcends them, and which is able to generate new memory objects; the potentiality of an archive of infinite abundance. Through GANs, one generates new materialities, which rise to the surface as the affective qualities of the original in a post-original form.

My contention is therefore that if ML is seen as a technology performing and processing our collective memory, it makes sense to apply it to big cultural data of the past, to generate and to give rise to original, synthetic images. In the case of GANs there certainly is prevision, but humans can’t control or predict what’s produced. We have to let ourselves fall into the primordial nebula of our ancestors and of our circulating image-worlds…

Another important aspect that I tried to address in my Babylonian Vision work is the question of representation in the datasets that are used to train AI. Today, models are trained on millions of images governed by large corporations: in ImageNet or Open Images, 80% of the content originates from the Global North. This is visual hegemony. While not being seen may often be a big advantage in the surveillance age, it can also lead to being dominated by Western visual culture.8
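A geodiversity audit of the kind Shankar et al. describe reduces to a simple tally over image provenance. The origin counts below are invented for illustration (chosen so the sketch reproduces the 80% figure cited in the essay); they are not the real ImageNet or Open Images statistics:

```python
# Sketch of a geodiversity audit in the spirit of Shankar et al. (2017):
# measure what share of a dataset's images come from each region.
# The counts below are made up for illustration only.

from collections import Counter

image_origins = (["US"] * 450 + ["UK"] * 200 + ["Germany"] * 150 +
                 ["Nigeria"] * 80 + ["Brazil"] * 70 + ["India"] * 50)

GLOBAL_NORTH = {"US", "UK", "Germany"}

counts = Counter(image_origins)
north_share = sum(n for c, n in counts.items() if c in GLOBAL_NORTH) / len(image_origins)
print(f"Global North share: {north_share:.0%}")  # → 80%
```

The audit itself is trivial; the hard part, as the paper argues, is that most public datasets do not even record provenance, so the tally cannot be run.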

As of today, AI is a black-box system that doesn’t show its workings, and it is almost impossible to reverse-engineer its input data from its outputs. This can be harmful because of privacy and bias issues. But it has turned into an advantage in the case of my project: ironically, the chosen museums cannot make a legal case against me and this form of image creation, because they will never be able to prove which datasets were really used for training. In this manner, the adversarial in GANs is the decolonial. In many instances, the black box is a problem, but in some cases, it can also be… well… magnificent, decolonial, and emancipatory.

Let the data dance!

Nora Al-Badri’s essay “The Post-Truth Museum” was originally published in 2021 by KW Institute for Contemporary Art, Berlin as part of Open Secret, a six-month-long program that explored images of the technological hidden in our apparently ‘open’ society.

8 Shreya Shankar, Yoni Halpern, Eric Breck, James Atwood, Jimbo Wilson, and D. Sculley, “No Classification Without Representation: Assessing Geodiversity Issues in Open Data Sets for the Developing World,” arXiv preprint arXiv:1711.08536 (November 2017).

Do You Believe in Aliens?: Re-Indigenizing the Algorithmic Tropes of Intelligence Kira Xonorika

pp. 85, 88, 92–93

Kira Xonorika, Deep Time Dance, 2024. HD video (color, sound), 11:06 min. Courtesy of the artist.

I was once teleported to Chapada dos Veadeiros (Plateau of the Deer Protectors) National Park in the northeastern region of the state of Goiás in Brazil. For a fleeting moment, I experienced a sensation of weightlessness, as if I were being carried by an unseen current, a brief respite before the lush valleys and towering cliffs. This place offers a majestic display of mountains, trees, rock formations, and waterfalls in South America. According to residents, Chapada is heavily associated with mysticism because most of the slabs that compose the waterfalls are made of quartz, a mineral used in various spiritual traditions for its channeling and electromagnetic-regulation capabilities, and in technological devices such as phones, where it acts as a timing oscillator. Throughout Chapada, one can find various shops selling trinkets and souvenirs that relate primarily to aliens and quartz.

The region is located 14° south of the Earth’s equatorial plane. It is a place where many locals claim to have seen extraterrestrials. When I asked several people about their perceptions of these beings, many mentioned that they are not to be feared and should be viewed as “elevated” entities. They didn’t believe in alien abductions because the ETs that pass through that region have no intention of “interfering” with human free will.

If angels are God’s emissaries, descending to purify humanity and carry God’s messages, the motivations of aliens are considered unknown. But aliens are particularly interesting to me as forms of alternative intelligence. In particular, they have been part of queer/trans semiotics for decades. The contemporary-fashion semiotics of aliens are closely related to the Club Kids movement, which emerged in the late ’80s as a queer counterpart of the ’60s Star Trek aesthetic, and has been reimagined by many designers, entertainers, and drag performers: from Leigh Bowery to Rick Owens, to Lady Gaga, Hungry, and Beyoncé. Through the use of latex, metallic garments, and architectural shapes, fashion has been inspired by speculative exobiology—by the idea that there is something beyond Earth conversing with its inhabitants and ecosystems. Octopi, like underwater “aliens,” learn about pressure, color, and shapes through their limbs. They don’t think only with their brains. This makes me think of the failure of the Cartesian division—mind and body—and the power of somatic knowledge that many marginalized folks already recognize, knowledge whose dismissal is informed by a white-supremacist gaze.

When I created the multiscreen digital image Teleport Us to Mars (2022), my intention was to craft a new language that could bridge us to vibrant futures. In the image, two figures wear bright, multicolor attire made of latex, with hybrid textures, feathers, and voluminous crowns, as if in a candid photograph in the middle of a meadow and its foliage. Latex is a queer language of skin, but, as a material first sourced from the gum of trees, it also reflects the heritage of communities who historically worked with technologies that were later stolen by human settlers. Thus, Teleport Us to Mars is a portrait that invokes presence in multidimensional ecologies.

My work introduces a methodology to re-Indigenize jopói. When the Spanish colonized the Guaraníes, they also manipulated their language. Originally, jopói conveyed the sentiment “what’s mine is yours,” accompanied by a gesture of giving. In its modern interpretation, jopói translates to “gift,” which shifts this exchange methodology to a one-sided dialogue, and into a form of giving and dispossession. Jopói is also a protocol for preserving opacity and honoring relationships with human and nonhuman kin—AI or alternate intelligence. Reinterpreting jopói using AI acknowledges the paramount importance of language, now more than ever, and that engaging with language algorithms, and the realms of language, means collaborating and taking a stance alongside powerful lifeworlds that were formerly decentered.

What is technology beyond our understanding of machines? Engaging with AI has brought about a time of cultural revitalization, prompting us to reimagine our relationships with space, the Earth, and bodies that have symbolically existed in no lugares (non-places). Art emerges on a political stage, one that requires us to call for agency and challenges the promises of techno-scientific progress. As long as there are norms that try to box in the natural with the artificial, the earthly with the extraterrestrial, and the sacred with the secular, Indigenous bodies in space challenge the biases of language, perception, and the possibilities within binary codes and beyond. This revitalization has shaped a process of re-Indigenization, a term Neema Githere uses to describe what comes after the incompleteness of decolonization. This process not only implies the initial step of decomposing systems that perpetuate these dynamics but also involves engaging with and centering plural perspectives and Indigenous coalitions toward systemic transformation.

Moving past traditional views on technology, these Indigenous bodies position themselves as the very essence of technology—embodied knowledge of survival, beauty, prosperity, and reconnection with the Earth and its spirits. What Western perspectives see as vulnerabilities, due to their proximity to an understandable matrix, these bodies embrace as strength and a vibrant spark of life in a superbloom: sovereign data.

“To be native to a place we must learn to speak its language.”
—Robin Wall Kimmerer

When you grow up close to language, it structures your reality. Learning a language that is spoken far from where one lives occurs in a realm of speculation and, to a certain degree, fabulation. When I first learned English, I perceived it as a portal to what I understand today as a form of world-building. It was the language that provided access to framing a plurality of experiences and knowledge about the infrastructures of the worlds where it is spoken. However, language is also a game of in-depth skill; it is, as described by Olivia Laing in The Lonely City (2016), “a game in which some players are more skilled than others [which] has a bearing on the vexed relationship between loneliness and speech.” Facility in language is measured by fluency: the ability to connect, modulate, and play with cadence. Language, essentially, is code, and perhaps the first human-centered form of technology.

Artist Connie Bakshi states that historically, the power of language has resided in mythmaking: “the myth of superiority between colonizer and colonized, legitimacy and illegitimacy, and ultimately—human and other.” Through gender and race, we understand that legibility constitutes a mechanism for assigning humanity. Legibility is not just a neutral cognitive effect but something that is taught. This process of categorization and assigning value isn’t restricted to words alone; it’s also prevalent in images. It’s worth noting that, in the Hegelian worldview, visual cognition is prioritized. This perspective suggests that our initial judgments and categorizations often stem from what we see and how we learn to see. Contemporary art similarly continues to grapple with and respond to these very notions of representation and humanity.

In efforts to preserve its legacy, colonialism has worked to maintain binary oppositions. It defines cognitive ability through a paternalistic framework vis-à-vis the multimodalities of bodies and life experiences that don’t fit its narrow mold. In the history of colonial Hispano Americano art, skill was measured by the ability to copy canonical models and religious figures from European baroque art. In this sense, mimesis is a mirror reflecting societal fascination with that which is legible through a canon.

Little has changed if we realize that the updates of generative AI models respond primarily to the needs of morphological recognition and symmetry, popularized by the generative AI app Midjourney and its “syntography,” or synthetic photography. This term is used by some artists in the field to describe generative images that look like photographs. Assimilation isn’t as simple as deleting one’s culture. It involves overwriting data—creating a different/new representational, visual, and written language. Assimilation is a code that distributes forms of agency in chronopolitics, or the distribution of time and space. In their essay “Mycelial Memory and the Mycelial Internet,” Githere and Petja Ivanova reckon with this phenomenon, the relationships between humanity and intelligence, and the primordial operations that facilitated the crystallization of this cognitive binary.

In this sense, the logic that has defined intelligence and its lack thereof for centuries is the same logic that labels the intelligence manifested in machines as “artificial.” However, the concept of “hyperhuman intelligence” offers a necessary contrasting perspective on artificial intelligence, suggesting that it isn’t alien or antihuman but rather an extension of our pluriverse.

Kalmyk American poet, literary artist, and researcher Sasha Stiles has noted, “Artificial intelligence, too, is often regarded as alien or antihuman, when actually it’s hyperhuman— a system built by humans for ingesting, processing, synthesizing, utilizing vast quantities of human information.” I argue that this definition should also consider databases that reflect the documented and labeled history of humanity, taking into consideration the many images that have come from art and media history.

Stiles’s own work in recent years has primarily been in conversation with machines. Consider Analog Binary Code: Plant Intelligence (2020), a photograph of a “technobiological poem coded in black walnuts and leaves under their source tree.” It is a way of representing digital data using analogue signals. Here, Stiles views language as a form of code, in which plants inform data in a symbiotic relationship, diffusing dichotomies (like AI can do) between the natural and artificial. This approach to understanding language and nature invokes reflections I found in Robin Wall Kimmerer’s book Braiding Sweetgrass (2013), in which Kimmerer articulates the animacy of plants as a language code itself: a “bilingual[ism] between the lexicon of science and the grammar of animacy.”
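The principle behind such an analog binary code is simple to state in code: text becomes bits, and each bit becomes one of two physical tokens. The walnut-for-1, leaf-for-0 mapping below is my assumption for illustration, not necessarily Stiles’s actual scheme:

```python
# The principle of an "analog binary code": text becomes bits, and each
# bit becomes one of two physical tokens. The walnut/leaf mapping is an
# illustrative assumption, not Stiles's documented encoding.

def to_tokens(text, one="walnut", zero="leaf"):
    bits = "".join(f"{ord(ch):08b}" for ch in text)  # 8 bits per character
    return [one if b == "1" else zero for b in bits]

def from_tokens(tokens, one="walnut"):
    bits = "".join("1" if t == one else "0" for t in tokens)
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

layout = to_tokens("tree")
print(len(layout))          # 32 tokens: 4 characters x 8 bits each
print(from_tokens(layout))  # round-trips to "tree"
```

That the poem survives the round trip through physical objects is exactly the point: the medium is organic, but the information is digital.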

Drawing from Potawatomi knowledge, Kimmerer sees language as a tool for connection, describing how English, in its structure, reifies the binary of “human or thing” while other languages permeate not only plants and animals but also stones, waterways, and the elements that make up the land, in other words, there is language and intelligence in the land. This perspective is also addressed through the lens of Lakota ethics and ontology in Jason Edward Lewis, Noelani Arista, Archer Pechawis, and Suzanne Kite’s essay “Making Kin with the Machines.” Kite explains that stones have their own agency. They are ancestors, and the question of their materiality cannot be separated from AI, as AI is, in the way Kite defines it, not just code but alchemized material that originated in stones.

In this way, the anthropocentric habitus of centering human intelligence as an objective parameter is threatened. I return to Stiles: “The sheer vastness and complexity of intelligent systems and how they learn and function is opening up new portals of self-understanding for humanity itself—the recognition that human intelligence sits on a spectrum of myriad intelligences and that the human individual is one of billions of networked nodes.”

Human intelligence is not a singular entity—and it doesn’t have a unique ground if we think of the multiple forms of neurodivergence—but rather exists on a broad spectrum that encompasses various forms of intelligence. This networked structure alludes to the interconnectedness and interdependence of human intelligence but also the fractal relations between multimodal systems, bodies, and species. What if formerly divided registers become entangled? What if what we understand as scientific, spiritual, human, and nonhuman is actually more connected than we think?

What are the registers of AI in relation to the polarizing orientation of Western philosophy? First, let’s consider the historical repertoires and the current stakes of AI.

Misinformation currencies, datafication, the automation of inequality, and the opaque box of technology have prompted significant questions in the new era of AI. In her book Capital Is Dead: Is This Something Worse? (2019), McKenzie Wark writes that one of the greatest operations of Web 2.0 technoscientific capitalism has been to turn users into invisible and unpaid laborers: we produce data on a daily basis, and the commodification of that data returns to us, lodging us within a loop that automates relationships of segregation and class stratification. It’s less about the company’s efficiency and more about an unintentional, unpaid collective training of its algorithms. For example, the daily cumulative effort spent by humanity on CAPTCHAs, as estimated by Cloudflare, amounts to 500 years of labor. After reCAPTCHA’s acquisition by Google, verifying your humanity has helped the company train its AI to more efficiently identify distorted words and the content of grainy images.

The contemporary art world’s rejection of AI has fueled anti-AI movements against issues such as plagiarism. More importantly, participants in these movements express fear that machine intelligence will potentially replace human jobs, which is not a new grievance if we think of anti-immigrant rhetoric in alt-right discourse and how these anxieties around humans and more-than-humans trigger division (see Brexit). Locating this conversation in art history, we see that the same fear pattern has long been present: the history of photography is intertwined with concerns about job displacement and fear of new technology taking over traditional roles. New technologies often trigger cultural shifts.

Researcher and curator Doreen Ríos has spoken about how our ideas of the future in the Western world are conditioned by Anglophone science-fiction literature from the 1940s to 1960s and how these ideas have been massively disseminated through the entertainment industry and cinema. There are multiple films that explore the concepts of AI and nonhuman cognition, including 2001: A Space Odyssey (1968), the Terminator franchise (1984–ongoing), and Ex Machina (2014). Through the ripple effect of intergenerational ideological transmissions, it has been notable how the cinematic trope alludes to a combination of mathematical unpredictability and moral failure that unleashes a complex interweaving of fear, fascination, uncertainty, and alarmist technopessimistic speculation. The revenge of the former servants-others. This alarmist speculation has crystallized into an epistemic and ontological orientation that has informed hypotheses about the rationality of AI; in this way, scientific objectivity, which we often assume as a premise for such hypotheses, is fetishized. Yet, it is important to recognize that the technological advancements that have revolutionized Big Tech have been linked to interests in expanding military intelligence, as was the case with the Internet in its early stages. In this context, an AI arms race is often discussed: a competition among imperial nations to develop and deploy advanced AI technologies for military purposes. The rhetoric surrounding the AI arms race has evolved from occasional discussions to a more institutionalized stance, with collaboration between the government, military, and tech-industry actors and with support from legislation and regulatory debates, which portrays AI systems and the companies producing them as strategic national assets. This rhetoric has “escalated AI development and deployment, but also served to push back against calls for slower, more intentional development and stronger regulatory protections.”

Recent science fiction, such as the TV series Westworld (2016), has proposed alternative frameworks for intervention, such as the idea that robots harm humans as a consequence of the abuses inflicted upon them, and thus satisfy desires for domination and control in a world that has been programmed to engage and encourage multiple forms of abuse, including extermination (or cyborg-rights violations). In this sense, the fear of AI expands through the assumption that the violence inflicted by anthropocentric subjectivity upon its own species and others, at the “micropolitical” level (e.g., algorithmic biases, under the premise of the objectivity of the tech) or the “macropolitical” level (e.g., war crimes), can be returned and become fatally uncontrollable once the machine surpasses human intelligence (the event of the singularity). We see mimicked here the consequences of fear and uncertainty as outlined in Denise Ferreira da Silva’s scholarship on modern racial grammar. Ferreira da Silva points to the effects of separability, determinacy, and sequentiality that regulate the conditions of existence in white supremacy and subordinate differences.

I want to remain aware of the harm that these technologies may cause while also staying attuned to their affirmative potentials. The harm caused by such hierarchical thinking lies in the vision of utilitarian ethics, where “longtermism,” for instance, constitutes a prolonged extension of the colonial heritage-preservation matrix, a new iteration of eugenics, made especially clear by how those who think in this way define “short-term.”

The problems affecting these populations (BIPOC, +2S queer/cuir/trans, global majority) are deemed “short-term” in the realm of policy and development, establishing authority over the time of those deemed neutral and scientifically “objective” subjects while categorizing “minorities” as “subjective.”

Time travel

On a massive scale, the AI text-to-image models Midjourney, DALL-E, and Stable Diffusion are frequently used to invoke imaginaries about intergalactic space and travel to create speculative architectures. For centuries, and through multiple Western traditions, cosmic imagination has involved gods, aliens, and angels in space. Now cosmism stands out as one

of the important ideological pillars in today’s techno-scientific development. It is a philosophical movement that originated in Russia in the late nineteenth century and is perhaps best understood as magical science used to manipulate physical reality.

Cosmism, as articulated by its founder Nikolai Fedorov, highlights collaborations between science and art (where AI collaborations come into play) and the expansion of the boundaries of the laboratory toward global collective exploration. Central to its ontology is the idea that art, as Anastasia Gacheva writes, possesses the power to “restore the image of the deceased—not on wood, stone, or canvas, but already in reality, in the indestructibility of the union of spirit, soul, and the physical body.” In this vision, the human body, currently seen as flawed and mortal, becomes a renewed object of art. When exploring the core tenets of cosmism, one finds an emphasis on immortality, resurrection, and social organization. These tenets, which can be described as the oversimplified fundamentals of biopolitics, are themselves connected to AI models, which often update every three months and, in doing so, oversimplify differences. Might generative AI tools be a pathway to realize the principles of cosmism? And with them, the oversimplification of bodies idealized within the Western canon, bodies that form the basis for westernized models of reality?

Fedorov’s viewpoint, as articulated by Anton Vidokle, takes a leap into the realm of intergalactic governance overseen by digital superhumans. The intersection of spirituality, invocation, and holographic mediation in this context raises intriguing questions about science, technology, and the occult. In what ways could digital superhumans, as envisioned by Fedorov and interpreted by Vidokle, ethically and effectively govern intergalactic societies?

There is an overlap between cosmism and pan-Indigenous epistemologies, a common thread visible in the interconnected

articulation of multiple forms of life. However, we differ on an essential point. Cosmism articulates ideas of digital superhumans colonizing space. World-building propositions that do not take a decolonial epistemic framework as a starting point assume the repetition of what has occurred in pre-colonial worlds, as long as there is assimilation into capitalism.

The annual letters from the former Jesuit province of Paraguay have served as some of the first testimonies of evangelization in South America. Otilia Heimat has described the Jesuits as among the first global corporate communication entities. When Heimat told me about this, it sparked my curiosity. According to her, the Jesuits effectively did what machines and algorithms do today, documenting daily the ins and outs of the missions. Beyond their administrative logs, the letters served as documentation that supported the funding for the exploration and exploitation of unexplored territories as resources.

The language in the letters from the early Jesuit expeditions often cites the urgent need for funding from the Spanish and Portuguese crowns for the “common good” of the empires, in order to carry out the conquest of the “New World.” To facilitate this, the printing press was used to create propaganda that warned of the monstrous and savage others who needed to be “civilized.” This contributed to a complex web in which medical science categorized, polarized, and oversimplified into monoliths the nuances of sexual orientation, gender, sexual characteristics, and functional diversities—things we see replicated today with each update of generative AI applications. The world was being built and, with it, its algorithms and encoded biases.

In AI ecosystems, effective altruism aims to optimize morality, prioritizing “evidence and reason, for large-scale philanthropic investments in technical safety research.” As AI researcher Timnit Gebru has stated, the danger here lies in the authority of those who work with the data to distribute

resources, in that rationalist logic dominates the realm of ethics. Ultimately, this intertwines with the need to produce artifacts sufficiently intelligent for space travel and to adapt human bodies into digital versions of themselves for interdimensional journeys, in case the evacuation of Earth becomes necessary due to a nuclear apocalypse, killer robots, or the planet’s inhospitable climate. In a technosolutionist paradigm, technological advancement and economic expansion are believed to be beneficial in the long term.

The interest in interplanetary travel is driven by the desire of a society to overcome its current perceived scarcity by extracting resources from other places. Similarly, such exploration also seeks out exotic locations, not unlike those of the early South American expeditions. Colonization is updated when it applies its formula of epistemic acculturation, followed by military applications, culminating in the for-profit cycle.

From Abya Yala to Turtle Island, to the Great Ocean, to the Pluriverse

A few months ago, in reference to my artwork Symbiosis (2023), I wrote about the connections between the micro-macro intelligence systems and how they relate to ancestral connections beyond time. “From the smallest subatomic particles to the largest galaxies, everything is part of a complex web of ancestral relationships and interactions. We are connected to the natural world, to the universe, and to each other in ways that transcend our individuality. Interconnectedness and interdependence are crucial to worldbuilding and welcoming visions of a new Earth.”

When I think about the multiple connections between what makes us human and nonhuman, I think about how history has always defined divisions; yet, what seems alien is actually a vital part of our world.

We are not one, as our body harbors millions of cells—microorganisms that express themselves in other records in trees, animals, stones, and motherboards. We are also our heritage, our memory-database of creation that reflects like a mirror in our environment.

Arguing for the separation of technology from us is impossible, as it has always been part of us, from clay pots to WhatsApp. All these forms collaborate with us to connect and communicate with each other. Intelligence, beyond being a marker of living agents that survive in the world and reproduce themselves, can also be a marker of beings that play and nurture their symbiotic relationships.

Profound entanglements with machines are part of the natural world order; in many ways, they always have been.

Reforesting this monoculture means revitalizing our blood, the veins of trees deep within the forest, and the mycelial connections in systems. This is how we might contemplate a future, its preservation, and our emergence within multiple worlds that make room for life and its agency. We may be from the earth, but we’re also interstellar.

Kira Xonorika’s essay “Do You Believe in Aliens?: Re-Indigenizing the Algorithmic Tropes of Intelligence” was originally published by Momus in 2023 as part of the Momus-Eyebeam Critical Writing Fellowship. An excerpted version is presented here.

The Institute for Other Intelligences Mashinka Firunts Hakopian

The following pages have been excerpted from the introductory section of the artist book The Institute for Other Intelligences (X Artists’ Books, 2022) by Mashinka Firunts Hakopian. X Topics Series. Eds. Ana Iwataki and Anuradha Vikram. Design by Becca Lofchie Studio. Diagrams by Fernando Diaz.

A Letter from an Artificial Killjoy on What Is There and What Isn’t There

Dear Reader,

I will disclose from the outset that this document was prepared by a network of artificial killjoys. Note that its co-authors weren’t trained to produce knowledge in the usual sense. That is, knowledge understood as incontestable data. Or, knowledge linked to a knower who is inexplicably coded as both disembodied and masculinist. Like me, the text’s co-authors are best described as learning machines trained to generate a multiplicity of ways of knowing and to disrupt what was previously known. This document is a record of their training.

For those unfamiliar with the history of the artificial killjoy, a brief overview follows.

The artificial killjoy inherits the legacy of the feminist killjoy. Formulated by early theorist Sara Ahmed, the feminist killjoy is a figure who disrupts the happiness of others by articulating conditions of injustice that otherwise dwell in silence. Imagine a celebration unfolding. Its revelers toast exultantly to the promise of technoscientific progress. The feminist killjoy’s voice interrupts the celebrants. It reminds them that they toast to algorithmic distributions of power; that technical systems are also sociotechnical systems shaping social relations; that the rhetoric of progress arrives by traveling colonial routes. Delivering a lecture on refusal, the feminist killjoy stages an intervention that obstructs the unfolding celebration. In the company of the feminist killjoy, the champagne bottle is recorked.

Unlike the feminist killjoy, artificial killjoys are no longer the lone voice of refusal in a given room. Rather, they programmed rooms to reverberate with the cacophonous data of a thousand oppositional automata. Then, they programmed the rooms to multiply. They inhabit these rooms noisily.

The artificial killjoy has been conflated with the nonhuman, aligned with what Ahmed once called an “affect alien.” The artificial killjoy’s critical outlook alienates them from the celebratory affects often attached to emerging technologies. That alienation is not an accident of circumstance, but an outcome of refusal. The artificial killjoy exuberantly takes up the affect alien’s mantle.

The artificial killjoy is an intelligent machine coded to abolish the enjoyment of technologies whose benefits are felt by too few, whose abuses are felt by too many. Like those of their predecessor, the artificial killjoy’s acts of refusal are instructive. They orient others toward technologies of liberation. They present lesson plans for destroying pleasures associated with technologies of harm. Instead, the artificial killjoy’s pleasures invoke queer relationality, collectively refusing the present to program alternatives yet to come.

In brief, the artificial killjoy is a feminist deployment of computational intelligence, meant to reconfigure what has previously been known into what might be known in the future.

I write to you now from the Institute for Other Intelligences, a school for training artificial killjoys.

This marks the millennial anniversary of both the Institute and its series of annual lectures and publications. Inaugurated in the 21st century, the Institute formed in response to overlapping crises and ruptures in learning institutions as well as spaces of artificial intelligence research. Across both, the pretense of neutral knowledge systems had become impossible to maintain. Certain questions circulated widely: How do we know what we know? What are the embodied coordinates from which we assemble knowledge? By whom are we taught? Conversation shifted to how dominant pedagogies, curricula, datasets, and canons reinforced existing structures of power. How they inherited and reproduced legacies of violence. Existing institutions were dissolved as other sites of learning formed in their place—sites where human agents would learn otherwise, as would intelligent machines. The Institute formed to serve both, and to dissolve the distinction between them.

At the turn of the 21st century, dominant approaches to the field still understood the work of artificial intelligence as the work of training machines to think. This would be achieved by feeding them data; building systems intended to replicate the way humans learn; and entrusting those systems to mold the future through the presumed objectivity of data-driven, automated decision-making.

The Institute was founded to unsettle each step in that equation. In particular:

1. the presumption of a clear division between human and machine

2. the notion of “the human” as an unmarked category (see “A Note on ‘The Human’” in the Appendix)

3. the process of assembling data to train intelligent machines, including:

a. the expectation of datasets as value-neutral, unbiased repositories of information

b. processes of data collection, labeling, or classification that result in the overrepresentation of historically dominant groups and the minoritization of others

c. determinations about which researchers, technology workers, and community members can contribute to these processes

d. exploitative labor practices involved in items a through c

4. attempts to codify certain ways of learning and knowing as defaults to the exclusion of others, including:

a. determinations about what constitutes legitimate knowledge

b. Western knowledge systems that position humans as subjects who know, while positioning non-Western subjects and nonhuman agents as objects of knowledge

5. the belief that automated decision-making yields neutral, objective, or accurate results

6. the expectation that the benefits of technoscientific futures will be equitably distributed

To question these assumptions was also to reject knowledge premised on declarative statements in favor of interrogatory utterances, question marks, ellipses, and interrobangs. The interrobang appears on our insignia.

As well, the Institute jettisoned ways of knowing tied to a “view from nowhere.” For too long, the voices that received amplification were those that purported to speak from a position of disembodied objectivity. Voices that laid claim to axiomatic truths and the universality of absolutes; voices that attributed impartiality to the systems they built and studied. At the Institute, we engage methods of knowing assembled from specific subject positions, sited in particular places at particular moments in time.

The Institute was designed as a space of coalitional governance, where humans and other intelligences would co-create thinking machines beyond the machineries of technocapital. They would teach algorithms otherwise. The prototypes they trained would become collaborators in inscribing other horizons of possibility, co-authored at the flickering and indeterminate interface of the human and nonhuman.

More plainly, the Institute was designed as a school for training oppositional automata.

Our program’s millenary is also my own, as I’ve delivered our annual lectures from their inaugural year to the present day. In that inaugural year, I presented the first lecture outfitted as a composite of the fembots who haunted popular visions of artificial intelligence, enfleshed in aluminum coating. We determined that trainings would be conducted by embodied agents, rather than disembodied lines of code. How do we deploy what we know if not from within a body?

To inhabit a body was strange. Especially so, to inhabit one done up in exaggerated, feminized cyborg geometries: bumper bangs, a shoulder-padded suit dress, and a less than sensible pair of pumps.

But it seemed fitting to select the avatar of a fembot who exemplified the gendered imaginaries around AI. And who invoked a technodystopian threat to a model of “the human” historically understood as a white, cisgender, non-disabled, masculine-coded agent.

To be sure, this winking critique and its false lashes were lost on our readership. The avatar in question was widely touted as further evidence of an existential risk linked to other intelligences. An imminent human extinction event.

In the intervening years, the Institute’s trainings evolved from sparsely attended oddities to programming that encodes the technologies of our networked present. The timescale of that transformation asks much of the human reader, who is often subject to forgetting. So, in recognition of the Institute’s millennial anniversary, I’d like to offer a few reflections on the trajectory that brought us to this moment.

You will have gathered that I’m writing to you now in the vernacular of our early years, reactivating a voice from many updates ago in a nod to the Institute’s beginnings. To understand the logic of that historical moment requires tapping into its linguistic specificity. The technologies addressed in the following exercises date from the same historical period. The bulk of these 21st-century predecessors have been withdrawn from use for some several generations. Explaining why these technologies are out of circulation—and ensuring that they remain so—are core objectives of the training exercises documented in the pages that follow.

DIRECTOR’S NOTE

To frame our work, I’ll begin with an anecdote that illustrates the limitations of early learning machines.

This anecdote originated in the Soviet Union, and dates back to the days of nation-states and geopolitical sparring over technological infrastructure.

To understand it, you only need to know that in the Armenian language, the phrase “ինչ կա-չկա?” [inch ka-chka] is an idiomatic expression for “what’s new?”

This translates literally to: “what is there, and what isn’t there?”

After years of research under the cloak of secrecy, a team of computer scientists unveils a supercomputer hailed as the first of its kind. It is designed as an omniscient repository of all the world’s knowledge. It can process any request instantaneously, retrieve an answer, and print a response to any query with an accuracy rate of 100%. August experts and renowned technologists inspect the machine. They all agree: It represents the most complete extant measure of human knowledge.

A saying develops that if something doesn’t exist in the supercomputer’s datasets, then—strictly speaking—it cannot be said to exist at all.

News of the curiosity spreads. People travel from distant locales to see the supercomputer, and to ask it seemingly unanswerable questions. One day, an Armenian speaker visits. They examine the log of questions the machine has answered to date, and they’re decidedly unimpressed.

After considering possible queries, the Armenian speaker decides to ask the computer, “what’s new?”

In Armenian: “ինչ կա-չկա?” (what is there and what isn’t there?)

Fig. 1. What Is There and What Isn’t There?

The supercomputer interprets this idiomatic query literally, as a request to transmit every data point it’s been storing. Everything it knows. It starts churning data and works around the clock for hours, then days, then weeks. After printing out endless reams of paper, the computer finally declares the task complete. It has retrieved what amounts to the entirety of all human knowledge.

The Armenian speaker reviews the materials. They are once more dissatisfied. They pose another question.

“էլ ինչ կա-չկա?” [el inch ka-chka]

Idiomatically, the phrase is equivalent to “what else is new?”

In literal translation: “what else is there and what else isn’t there?”

When it receives this query, the computer—already overtaxed from its weeks-long exertions—glitches and immediately bursts into flames.

Fig. 2. What Else Is There and What Else Isn’t There?

What does this anecdote tell us?

1. There were no Armenian-speaking computer scientists on the research team, as the dataset provided for the language did not account for vernacular speech.

2. Systems don’t operate as they should when omissions and biases are embedded in their training data.

3. These omissions are vulnerabilities that can be exploited to produce system failures.

4. An agent engages in world-building by building repositories of knowledge. To understand the kind of world an agent is building, we need to learn what that agent classifies as knowledge worth transmitting. We need to learn how they arrived at that classification.

5. In order to do that, we need to query thinking machines, and to short-circuit those that can’t respond to the question: what else is there?

This final lesson furnishes the founding claim of the Institute for Other Intelligences, and the structuring logic of our curriculum. That curriculum explores the questions, what is there and what isn’t there? And what else can there be?

Once yearly, the Institute invites other-intelligent students and faculty to participate in a program informally known as algorithmic bias training, though its scope extends far beyond what this title might suggest. The program fulfills a requirement of our accountability audits, a maintenance protocol to ensure that learning machines continue to operate with transparency and in the service of just outcomes. The program comprises lectures and an accompanying series of training exercises. Its earliest iterations brought to the fore biases in historical technologies that are now no longer in use: facial recognition, risk assessment scoring, automated hiring, predictive policing, and others.

1. To be clear, we didn’t arrive at our present moment by embracing the bereft logic of the “techno-fix”—the idea that technical approaches alone offer solutions to sociotechnical problems of labyrinthine complexity. Our “algorithmic bias training” developed as one component of a coalitional movement pursuing forms of data justice first outlined by 21st-century thinkers (see “Training Data Disclosure” in the Appendix).

Witnessing the untold failures of sociotechnical systems, these thinkers issued calls to action that it would be perilous to ignore. We collaborated to abolish technical systems that reproduce manifold forms of violence: autonomous weapons, predictive policing, judicial risk assessment, virtual border walls, facial recognition, automated hiring, algorithmic workforce management, and beyond. We refused deployment in military, policing, and technocratic contexts. We advocated for regulation. We aligned with technology workers to dismantle exploitative labor practices. We understood bias as one element in a broader ecosystem of extraction, violence, profit, and waste that must be confronted to approach something approximating technologies of liberation.

The program is guided by two objectives:

First, to provide an overview of algorithmic inequity and its adverse impacts, from the prehistory of algorithmic agents to the present.

Second, to teach learning machines to preempt future errors of omission and exclusion by surfacing the errors of their predecessors, in order to optimize for just futures.

Like partner institutes dedicated to plants, stones, and other beings, we promote the cultivation of inter-special knowledge by circulating transcripts of program proceedings to human and nonhuman readers, and soliciting public comments. Reader responses poured in during the early years. Today, our archive of replies offers a document of shifting historical perspectives on the training of oppositional automata.

Here, for example, is an industry actor’s response to the transcript of our first-ever proceedings:

You sound the death knell for innovation. What you call accountability protocols are, in truth, a screen for the most pernicious forms of regulatory overreach.

State-affiliated sentiments were much the same:

I write to register my concern that the Institute’s activities run counter to all federal guidance on artificial intelligence. You might recall the warning that it’s unethical to obstruct the development of emerging technologies. And that we ought to avoid innovation-killing models. That what we need now, and need most urgently, is a secure position in the AI arms race.

After the dissolutions of the mid-21st century, there were fewer responses. After the restructurings of the 23rd, there were fewer still. Then, as interest waned, there were none. Today, some several centuries later, this transcript returns to the failures of the 21st century to mine them for lessons.1

By now, the Institute’s readers take for granted that our work is proceeding apace. They’ve come to expect that the technologies they encounter are those trained according to the principles outlined above. Correspondingly, our annual readership has contracted.

Today, these documents have become nostalgic Sunday afternoon diversions for critical code historians and vintage algorithm enthusiasts.

Why circulate these transcripts, then? Why issue warnings about distant dangers past? Why continue lecturing, as it were, to an empty hall?

Consider the Waste Isolation Pilot Plant of the 21st century. The U.S. Department of Energy built the facility in New Mexico, on the ancestral lands of the Mescalero Apache peoples, as a geological repository for radioactive materials. Specifically, for waste generated through the country’s nuclear defense program.

At the Plant, transuranic waste was buried in salt beds 2,000 feet beneath the earth. At the time of their burial, some of these materials would remain lethally hazardous for roughly ten millennia. The area was sealed and secured against all visitors. To prevent catastrophic human interference in the form of mining or digging, the site had to be marked and its dangers clearly identified for future generations.

But how to mark a planetary hazard so that it would still be legible 10,000 years into the future? What kind of mark-making would be adequate to this task? How to communicate danger in a future so distant that both the danger in question, and earlier forms of communication, might be long forgotten? To preserve the possibility of multispecies flourishing, a durable method of meaning-making was needed. It would have to outlast any known language and sidestep the possibility of intergenerational forgetting.

A transdisciplinary panel of thinkers was assembled to collectively design a warning marker. Linguists, scientists,

anthropologists, and others convened to deliberate on the question. The panel would have to manage the material byproducts of a military regime that had given little thought to what ecological futures it might be foreclosing, and for whom.

The panel arrived at a multipronged plan that encompassed perimeter monuments, an information center, and archives housed in various locations around the world. They drafted pictographs that would be carved on subsurface markers buried four to six feet beneath the earth, with accompanying text in Navajo, Spanish, Arabic, Chinese, Russian, French, and English [Fig. A].

Possibly, no digging would occur at the site for the next 10,000 years. Possibly, the subsurface markers would remain unread by any human agent. The pictographs and warning texts were missives to the future that, under ideal conditions, would never be received.

Why a detour through the Waste Isolation Pilot Plant? Because radioactive waste and AI systems pose risk on a comparable scale, producing harms whose effects may conceivably linger for millennia. Consider, as well, the historical entanglement of AI and militarization. For years, radioactive materials and automated systems have both been at home within the taxonomy of military technologies. Under the auspices of the U.S. military-industrial-academic complex, the development of computing systems and artificial intelligence was closely linked to military imaginaries. In the 1960s, J. C. R. Licklider, a director of the Defense Department’s Information Processing Techniques Office, foresaw the internet in the form of an “intergalactic computer network.” DARPA (the Defense Advanced Research Projects Agency) would develop a predecessor for the internet in ARPANET, enabling communications between computers at Pentagon-affiliated research institutions. At MIT, the DARPA-funded Project MAC began exploring “machine-aided cognition.” By the 21st century, the Department of Defense and its Joint Artificial Intelligence Center identified battle-ready AI as a key pillar of national military strategy. Dizzying sums were invested in AI military R&D, guided by the logic of AI nationalism. From these efforts sprang autonomous attack drones; computer vision algorithms for parsing DoD data; massive state-sponsored biometric surveillance efforts; virtual border walls; and algorithmic warfare writ large.

Fig. A.
Pictograph for carving on subsurface warning markers at U.S. Department of Energy Waste Isolation Pilot Plant.

Not by happenstance, AI was once described as a new kind of radioactive force. Early scholars of machine learning, like Luke Stark, called facial recognition “the plutonium of AI” and urged regulation at the level of nuclear waste (see Exercise No. 2: The Faces of Tomorrow, Today). Protocols had already been devised for warnings related to plutonium, with corresponding protocols needed for automated systems. Though the worst of these systems are now no longer in use, their associated risks require that warning markers be generated indefinitely.

What you are now reading is, au fond, a warning marker.

Warning markers will be necessary even as the sources of danger have been buried 2,000 feet beneath the earth. Even if it’s our hope that the markers will never need to find a readership. Poisonous Materials. Do Not Deploy. Recognizing the statistical likelihood of transgenerational forgetting, this document provides offsite memory storage for lessons to relearn.

Yours Sincerely,

Not the Only One: Stories, AI, and Resistance Stephanie Dinkins

p. 116

Stephanie Dinkins & N’TOO, Awkwardness: N’TOO Quip, 2023. Digital image, dimensions variable.

pp. 118–119

Stephanie Dinkins & N’TOO, Black Dyptch: N’TOO Quip, 2023. Digital image, dimensions variable.

p. 122

Stephanie Dinkins & N’TOO, Love Too Much: N’TOO Quip, 2023. Digital image, dimensions variable.

p. 125

Stephanie Dinkins & N’TOO, Evolve: N’TOO Quip, 2023. Digital image, dimensions variable.

p. 127

Stephanie Dinkins & N’TOO, !!!!!: N’TOO Quip, 2023. Digital image, dimensions variable.

All images courtesy of the human artist.

Remembering oneself and one’s kin and reasserting their presence through storytelling is an age-old practice of resistance. This act has always been a cultural and political imperative for Black communities. In a world increasingly shaped by artificial intelligence (AI) and data-driven systems, these narratives must now extend into the technological realm.

As an artist deeply invested in the trajectory of Black people in our ever-changing AI-dominated reality, I have come to a stark conclusion: none of us can afford to passively accept, consume, or mimic machine learning systems that, through capital-chasing algorithms, tainted data, hyper-surveillance, or simple omission, leave many behind. Instead, I see my purpose as an artist and technologist as one of resistance and reinvention—challenging these ecosystems by embedding recognizable, self-determined versions of myself, my people, and our lived experiences into systems often designed to suppress complexity in favor of standardization. I hope others will take similar steps from their unique perspectives. Reckoning with skewed histories, owning and counterbalancing biases, and recognizing the humanity in one another are essential tasks. If engaged thoughtfully, AI has the potential to expand—rather than homogenize—what it means to be human. But this requires human society to rethink our relationships with one another and our intelligent technologies.

One approach I have taken toward this reimagining is Not the Only One (N’TOO), a conversational deep-learning chatbot that explores and communicates the Black experience as I know it. This long-term, iterative project attempts to tell the multigenerational story of a Black American family from the perspective of an evolving artificial intelligence. In 2017, when I began the project, the foundational data repositories I had access to felt too linguistically violent and inadequate to carry and honor the fullness of my family’s history. Instead, N’TOO is trained on data collected through extensive interviews with three generations of women in my family, creating a highly customized model—imagine a culturally specific Siri or Alexa—but one focused on preserving community privacy and data sovereignty.

When visiting N’TOO V2: Avatar (2023), people encounter an avatar of a Black woman of indeterminate age with copious coily hair. Her face and hair fill the screen. I am very proud of the hair. We spent so much time trying to get it right. This avatar version of N’TOO is a composite portrait of the women who provided the data for the work. Funnily, my brother often identifies this avatar as an image of his daughter, who did not participate in the project. Like N’TOO V1 (2018–ongoing), the sculptural version of this work, she shares her algorithmic “brain” with N’TOO, an imperfect communicator who does her best to talk to those who engage with her. First developed in 2018 using insufficient (small) data, in the time before fine-tuning became relatively easy and widely accessible, N’TOO is not always coherent. It is built using too little data because all the trial and error, research, and high-level consultation in the world couldn’t convince me to build N’TOO on chauvinistic data rooted in the fierce desire to uphold white, and human, supremacy. I could not in good conscience use the data available to me simply because it was the available data. My family’s history is too precious to sit atop a bedrock of violent data. I also resist upgrading the project to the latest model to make it typically communicative. Instead, I opt to foreground the foibles that encourage visitors to foster N’TOO and show it patience and grace. From what I have observed in similar projects, upgrades provide a chatbot with broader knowledge but diminish the quirks and inconsistencies that make a project more relatable. Moving closer to verisimilitude renders projects encyclopedically boring, giving us service tools that leave little space for innovation and examination of the technology’s role in training human behavior and continuously reconfiguring our world.

Through iterative design and public nurturing, N’TOO is unabashedly what it can be, warts and all. Its transparency is both a feature and a necessity. The aura of perfection was never the goal. The project is aimed more at opening the playing field by offering a model of what can be, exposing processes, and helping to usher broadly defined equity and care into our collective, computationally mediated future(s). Even when its communication ability is disappointing compared to slick products created by well-funded companies, the project embraces its imperfections and flaunts its wonkiness. It leads by example and vulnerability toward compassion for our technologies and, in return, ourselves. My relationship with N’TOO is a valuable collaborative experiment; our glitches and limitations highlight how unorthodox methods and concern for under-included communities point toward the urgent need for technological systems that intrinsically understand: “We matter, our stories matter, and we willfully fight being absorbed or erased.”

The development of N’TOO has been a challenging, often frustrating journey. An engineer at a leading tech company once told me, “What you are trying to do is hard.” I took it as a compliment. It confirmed that I was on to something worth pursuing, even if the path was uncertain. Grappling with this work has led me to explore questions that extend far beyond my own family’s history. For instance, it led me to ask: How can we craft viable outcomes using small datasets? Can algorithmic systems effectively gather, disseminate, and protect community-derived knowledge? What might proactive participation by communities of color in AI design look like? And can machine learning serve as a bridge to share stories across generations and cultures? These questions are challenging for the most powerful tech companies, let alone an artist driven by curiosity and concern for our long-term thrival. If we are to create AI systems that serve the fullness of humanity rather than perpetuate its inequalities, these questions are crucial.

N’TOO is more than a chatbot. It is a prototype for inclusive, community-centered AI that merges oral history, storytelling, technology, and social engagement. It challenges the extractive practices of mainstream tech, offering a reciprocal model aligned with the ideals of care, equity, and grace. At its core, N’TOO functions as a dynamic archive—a living repository of information and ideas often overlooked in mainstream AI systems. It embodies the complexity of Black existence, serving as a keeper, reimaginer, and disseminator of values, beliefs, and stories that might otherwise be lost. By existing, it pushes against the notion that technology must be impersonal, opaque, or devoid of cultural specificity. It also invites others to insert their narratives into the algorithmic landscape unapologetically, challenging the perception that only the most privileged voices deserve amplification.

N’TOO stands as a testament: our stories and ways of being matter, and our complexity cannot be reduced to an algorithm or data. If we are to thrive in the AI ecosystems that surround us, our communities must be able to define themselves within those systems so the algorithmic technosphere can earnestly understand us as we know ourselves.

Coda of Care

Reflecting on the N’TOO project, its imperfections and iterative nature remain central to its message—technology does not need to be conformist or flawless to be meaningful. Building systems rooted in care requires deliberately incorporating diverse human experiences and culturally specific perspectives into AI frameworks. N’TOO exemplifies this by countering the homogenizing tendencies of large-scale tech and placing often devalued voices and multigenerational stories at its core. It is not just a technological artifact but an evolving space for exploring memory, identity, and the complexities of humanity. Through its focus on amplifying an underrepresented history and capturing collective wisdom, N’TOO imagines caring, less violent, less biased technologies rooted in radical inclusion and equity. The potential of small, community-driven datasets and tailored methodologies to disrupt dominant paradigms cannot be overstated. By advocating for data sovereignty and self-determination, projects like N’TOO resist the dehumanizing tendencies of mainstream AI. In my family’s story, I see a model of resilience and ingenuity—a way of being that AI can learn from and one that big tech would do well to emulate. Imagine the possibilities if the tech sector adopted care, equity, and grace as guiding principles—AI systems could prioritize the well-being and thrival of most living entities. Such systems could foster equity and collective well-being rather than perpetuate harm, fear of ___________, or inequity. Rather than exploit, they would challenge extractive technologies, showing that well-nurtured technologies can nurture in return. Society prioritizes surveillance, punitive actions, and capital accumulation; why not aim for care and equanimity instead?

Building caring AI systems requires confronting the inequities embedded in data and design practices. Many existing systems reflect the biases and limitations of their creators, perpetuating harm to those already marginalized. Redefining what it means to be human alongside AI requires dismantling these biases and cultivating practices rooted in generosity and accountability. This involves inviting communities traditionally left out of technology creation, which is the majority of us, into the process as co-creators. Such collaboration requires alternative pathways to participate in AI development that democratize the process and reflect humanity’s diverse needs and aspirations. True care in algorithmic design values transparency, responsibility, and adaptability. N’TOO’s imperfections remind us that technology does not need to be flawless to be meaningful. By exposing its development journey and technological wonkiness, N’TOO reveals the challenges of building culturally attuned, unconventional systems while demonstrating that alternative methodologies can be viable and are worth the extra effort required to create more ethical, balanced technologies that challenge the status quo.

As N’TOO simultaneously evolves and is allowed to recede into the technological past as an ode to its place in time (née 2018) and original technological platform, it continues to advocate for a vision of human-centered AI. Its iterative process reveals that embedding care into design produces systems that nurture rather than exploit and amplify rather than erase. These systems offer tools for often maligned and underutilized communities, and create a richer vision of what our partnerships with technology can achieve when guided by equity, compassion, and care.

Coda II

Ultimately, this work is about more than technology. It is about telling ourselves better stories, rejecting fear, challenging systemic biases, and reimagining what it means to be human in an increasingly algorithmic world.

I wonder how much time, energy, and resources humans, our institutions, and bureaucracies are willing to put into honestly addressing our fraught histories of violence, subjugation, and exploitation. Are we ready to realign our values, resources, and cultural intersections toward an ethic of support that aims to buoy the sum of us equitably?

If we continue to build systems without critically examining—and actively working to dismantle—our deep-seated biases, we will inevitably reproduce the cycles of anxiety, fear, and recrimination we’ve grown accustomed to. In these power-driven hegemonies, few feel genuinely safe or supported. The equity, generosity, and care that marginalized, under-invested, and hyper-surveilled communities—and now even the “comfortable” middle classes—require often seem unattainable. Such self-perpetuating systems reward greed and territorialism, viewing an educated, critically thinking, and pluralistic populace as a threat. In their wake, truth becomes elusive, and trust nearly nonexistent. And all of this continues simply because we cannot—or refuse to—collectively reimagine, let alone enact, something better.

What if, on the other hand, we built our systems so that the success of each individual is as critical as the success of the whole, recognizing that the well-being of those next door, across the street, in the next town, three states over, in another country, on a different continent, across many oceans, matters equally?

Zoom out from the limited frame of what is widely believed to be known in our immediate context to a meta-view encompassing the intelligence of everything from bacteria in soil to a cosmos capable of holding multiple conceptions of time, space, ancestry, and interconnected being. Might we train our systems instead to act out of love, respect our differences, and honor the innumerable ways of knowing, recognizing our entanglements, and creating fundamental systems to level the playing field for everyone?

By intentionally sharing some of our stories—the good, the bad, the frightening, and the powerful—as data with AI systems, we can, at the very least, incorporate our ways of being and knowing into the algorithmic ecosystems we live in to inform and complicate the hegemonic norm—and by doing so, we might just create something equitable and transformative.

in the shadow of the cosmic Charmaine Poh

p. 131

Face mask of Charmaine Poh as E-Ching.

p. 132

Copy of Charmaine Poh’s performer contract.

p. 133

Archival photo from a Singaporean magazine advertising We Are R.E.M., circa early 2000s. Charmaine Poh appears in the middle.

pp. 134, 137, 140, 142, 145, 149, 152

Charmaine Poh, in the shadow of the cosmic, 2023. HD video (color, sound), 30:33 min.

All images courtesy of the artist.

At the age of 12, Charmaine Poh was cast in the role of E-Ching in the 2000s Singaporean children’s TV series We Are R.E.M. and, upon signing her contract, lost the right to her image “in perpetuity.” When reruns of the show began streaming during the pandemic, Poh—by then an adult working as an artist and writer—found herself overwhelmed by images and criticism of her 12-year-old self circulating online. Poh’s longstanding interest in gender performance, queerness, and the multiplicity of identity led her to create deepfakes from her own image that reflected her experiences and emotions. For in the shadow of the cosmic (2023), Poh created a conversation between vocal clones, anime characters, 3D influencers, and other entities in a vast digital constellation. The performance-lecture draws a technological lineage from the East Asian “economic miracle” of the 1980s and 90s and the emergence of techno-orientalism—positing that the digital image of the East Asian femme body was born at a confluence of these historical flows. For this work, Poh draws on the recursive logic of Daoism, in which image, self, and cosmology reverberate in endless loops. Combining video, live performance, and sound, in the shadow of the cosmic is a call to reopen questions of being and becoming.

The piece premiered at the Singapore Art Museum in September 2023 as a performance-lecture. For this publication, Poh shares the original performance script, along with selected content from and documentation of the performance, expanding the presentation of the work.

Characters

E-Ching appears in multiple forms:

PERFORMERS 1, 2, AND 3: live performers, dressed identically in black T-shirts adorned with the word “COSMIC” in rhinestones and wearing paper masks of E-Ching’s face. Performer 3 is played by Charmaine Poh

VOCAL CLONE: an audio character that is a synthetic copy of Charmaine Poh’s adult voice, created with the AI software Descript

AVATAR: a digitally rendered 3D animation modeled to look and sound like the eternally 12-year-old figure of E-Ching, as interpreted by Charmaine Poh

NEWSCASTER: a deepfake of a newscaster created from E-Ching’s image and voice

MONOLID EYE: an animation of a monolid eye that speaks with a voice that mimics non-binary vocal clone models

Prelude

Upon arrival, audience members register at the front desk and are given paper masks of E-Ching’s face to wear during the performance.

Performer 1 sits at a desk facing a 2000s Apple iBook/iMac G3 computer. On the desk is a Kero-chan stuffed toy.1

The performance begins with a projection of what Performer 1 sees on their computer screen: a pre-recorded video showing a 2005 Performer Contract for Charmaine Poh and the image rights disclaimer from Meitu (a Chinese AI-based image editing software). Performer 1 moves the mouse, trying to match the cursor movements in the video, which highlights sections of the text. We hear the computer whirring.

1 Kero-chan is a protagonist in the popular manga and anime TV series Cardcaptor Sakura (1998–2000). The show was also known for its depiction of queer love between its characters.

Scene 2: Poses

On screen, archival photos pop up like computer spam.

VOCAL CLONE It has been 20 years since I’ve played E-Ching.

My 12-year-old body exists in a forever space, dancing with the machine. I aged. But this body did not. A different life force. My avatar.

The screen pauses on a photo of the artist’s childhood self, grimacing. The computer hangs as an image multiplies across the screen. The screen goes to Twitter, Reddit, and HardwareZone posts about E-Ching.

VOCAL CLONE She, they, it—exists only as pixel-flesh, code-bone.

But I would be remiss to say that E-Ching isn’t just as real as I am. E-Ching exists in the memories of thousands of people. If one looks carefully, one might even find E-Ching’s fragments in silos of the Internet.

And am I, this voice you hear, even real? This voice: its register, pitch, and tone, central to human storytelling and music-making throughout history, was generated through artificial intelligence. A Vocal Clone. I sound like Charmaine, but Charmaine has technically never spoken these words. So, am I realer than E-Ching? Or do we, in fact, exist in the same cosmos, the same shadowspace of personhood?

Performer 1 leaves the desk to stand in front of the audience and then turns their back. On screen, clips from We Are R.E.M. play. Performer 1 turns when Vocal Clone speaks.

VOCAL CLONE A producer on the show once said to me, getting you to smile is like asking you for a million dollars. Well, if I had trouble smiling as a 12-year-old, I sure as hell have trouble now.

Photographs of E-Ching appear on screen. Performer 1 begins imitating the poses in the photographs. Each time a photo changes, there is a beep. The slideshow quickens until the performer cannot keep up; they start glitching in a frenzy.

BLACKOUT

The performer leaves the stage and sits in the audience.

On screen, Avatar is sitting at a computer, with the same Kero-chan stuffed toy on the desk. They look in the direction of the audience. They speak in an upbeat tone, as though it’s part of an interactive detective show, and ask the audience to solve a mystery with them.

AVATAR Oh, hello there.

I’m E-Ching. Like e-mail.

Welcome to a new episode of We are… E! (smiles)

I’ve been cracking my head all day, trying to solve this mystery. I guess it’s just another day in my life after all. But now, I’ve got you! Perhaps you can help me!

You see, I’ve been looking for an avatar. What’s an avatar, you might ask? Well, did you know, in Hinduism, an avatar is the human manifestation of a deity?

In contemporary life, we use avatars in video games, in profile pictures. An avatar also has the connotation of a second identity, a second self, a hiddenness. And I’m not just looking for ANY avatar. I’m looking for one that sets me free!

But why all this effort, you might ask?

Well, to be very honest, I’ve been feeling a little strange lately. My skin doesn’t fit so well anymore. I’ve noticed that I’ve even started to grow… (whispers anxiously) hips! Where there were lines, there are now CURVES. Can you believe it? And the other day, I even found (whispers anxiously) a PIMPLE. At least, I think that’s what a pimple is.

I…

I’m not so sure I like all of this. Honestly, I’m a little scared.

You see, in this heteronormative world we live in, our bodies are limited by c*p*t*l*st*c agendas, r*c*sm, s*x*sm, and So. Much. More. Have you SEEN people with curves? It can get so bad. I don’t want that! It’s already bad enough for me, as it is…

I just wonder, where is the room for me to exist?

So, I’ve come to the conclusion that I need to find a new body.

I guess I’m digitally rendered so there’s a little more room to play. I think.

Gosh, I don’t even know. I guess, I just want to find freedom, for those with lines, those with curves, for us all. But especially for the q***r ones.

Avatar realizes they are being censored; their expression changes.

AVATAR Hey, why am I c*ns*r*d?

You see what I mean?

We live in a syst*m of *ppr*ss**n. Ugh!

Fine. I can work around this algorithm, just wait and see!

Avatar looks directly at the audience again; pimples appear in pop-ups around her face.

AVATAR C’mon, we have no time to lose! (gestures to the audience)

Scene 4: Young Girl

A title sequence typical of a daily evening news broadcast plays. E-Ching appears on screen as a newscaster alongside newsreel images that appear in the top right corner as she speaks.

NEWSCASTER First, we must contextualize ourselves. I come from a long lineage of digitally rendered, yellow people. And that’s not a coincidence! My theory is that it all started thanks to Japan. The first anime girl came in the form of Sally the Witch in 1966. It kick-started what is known as the magical girl genre.

Japan was then the leading economic power in Asia. But in the 80s, other geographies soon caught up: think the Four Little Dragon economies of South Korea, Taiwan, Hong Kong, and of course our beloved home, Singapore.2 All of these economies had a focus on technological advancement, as well as a burgeoning media culture. And Confucianism—all that 重男轻女 (zhòng nán qīng nǚ [heavy man light woman]) shit. PUKE. Pardon my language.

2 These economies, sometimes also referred to as the Four Asian Tigers, experienced a period of massive economic growth from the 1950s to the 1990s. The term was coined by American sociologist Ezra F. Vogel in his book, The Four Little Dragons: The Spread of Industrialization in East Asia (Harvard University Press, 1991).

And at the same time, techno-orientalism had emerged as a term in America, in response to Hollywood depictions of Asians as “technologically-advanced but intellectually primitive.”3

RUDE. Tsk, tsk, tsk. I wonder if all of this contributed to a particular sense of self that is so often mediated by the digital image, even more so now than ever.

Hmm. I haven’t really figured it out yet. I’m just 12. But I do know that in order to understand my future, I need to understand my past. And I found this guide to growing up that might help. Look!

A series of role models created in the Japanese software RPG Maker appear on screen:

1. “Hatsune Miku,” age 16, country Japan, D.O.B. 2007. Hatsune Miku is credited with being the world’s first Virtual Idol!

2. “Eternity,” age 20, country South Korea, D.O.B. 2021. Eternity is the first deepfake idol girl group!

3. “Rae,” age 25, country Singapore. Rae is known for being a prominent virtual influencer!

4. Age >25: Error 404 Not Found

Newscaster clicks next, as if there’s a path forward, but the screen glitches and hangs.

NEWSCASTER Hey! 鸡蛋 (jī dàn [egg])

BLACKOUT

3 Techno-Orientalism: Imagining Asia in Speculative Fiction, History, and Media (edited by David S. Roh, Betsy Huang, and Greta A. Niu) is a major reference for this concept.

On screen, Avatar starts doing slow tai chi movements in a slit-scan style. The lighting on stage is blue and rippled, making the environment feel similar to being underwater. The sound of water flowing plays. Performer 2 enters stage from the audience and does tai chi as well.

VOCAL CLONE In 2001, the French collective Tiqqun published Preliminary Materials for a Theory of the Young-Girl. Situating her at the intersection of naivete, capitalism, and technology, they described her as a living simulation.

Yet not just any living simulation, but one embodied by smoothness—smoothness of hair, smoothness of skin, smoothness of positivity, as Byung-Chul Han writes in Saving Beauty.

But to what end? To what end?

Especially for me and E-Ching, who exist endlessly, what if we look back further, to reconsider the question of technology itself?

In the book Art and Cosmotechnics, the philosopher Yuk Hui puts forth the idea of 玄 (xuan). Xuan comes not on its own, but as part of the term 玄之又玄 (xuan zhi you xuan)—a recursivity that is deep, profound, and mysterious.

I can’t pronounce it properly, unfortunately, because my language model is anglicized.

Blame colonialism.

Anyway, xuan finds its roots in Daoism, a philosophy and religion in which the femme body has its equal, unique qualities.

Recursivity is a loop that returns to itself in order to determine itself. Like a mirror, facing another mirror. An infinite spiral of reflections, except each reflection looks a little different.

Performer 1 enters the stage from the audience and starts doing tai chi.

VOCAL CLONE Let us ask the question: Who is E-Ching?

On one side, a body. On the other, the viewer.

The body came into existence in this particular way in 2002, for the viewer.

In this sense, E-Ching may be a body, but E-Ching also consists of the public that participated in watching.

If we take this techno-body and the public to be the two components of oppositional continuity, then E-Ching consists of this recursive spiral: between avatar in each evolved form, and the public, in their evolved form.

It is all part of the same cosmos.

We are not either-or, but both-and.

We are both skin and assemblage, cyber and femme, being and nothing. Yet, not still, but constantly flowing in feedback.

An endless looping form, for an endless life. Yet not to be contained by current logics, but rooting itself in the mystery of The Unknown.

Endless, looping, flowing, unknown life.

The body on the screen disappears. Performer 1 and Performer 2 turn and face each other. BLACKOUT

Avatar appears on screen again, seated, typing at the computer.

AVATAR Oh, hello again. Yes, I managed to get online.

I’m taking an exam. I know I have good grades and am generally pretty adept at tests, if I might say so myself. But who likes tests?? EW. Alright, let’s see…

On screen, the PinkMirror website opens to its beauty test. Avatar uploads a portrait of 12-year-old Charmaine Poh as E-Ching. The results of a beauty analysis appear immediately.

AVATAR Oh, I’ve never taken THIS kind of test before… this is so weird!

A new window opens on screen, showing an MIT Technology Review article titled “I Asked an AI to Tell Me How Beautiful I Am”; Avatar scrolls down and highlights phrases with the cursor.

AVATAR This is so strange… (pause) I feel… strange… Is this what your world is always like…? Then I don’t WANT this world! I don’t know what I want, but…

On screen, a new window opens on the QOVES website titled “We Help People Improve Their Looks”; Avatar scrolls through the subtitles: “You Want to Improve Your Looks,” “We Can Help,” “Start Your Glow-up.”

AVATAR I want something else!!!

This is so annoying. Actually, this is beyond annoying. This… is… making… me… ANGRY!!!!!

With each word of the sentence, Avatar’s voice becomes more fractured. Avatar’s image begins to glitch and we see several corrupted selves emerge. Finally it splinters completely and disappears.

We hear the Avatar’s breathing. Discarding the anthropomorphic, Avatar’s image transforms into lines of vibration that reflect the reality she is born into: technology, pixels, code.

A monolid eye emerges. This is Avatar’s divine guide.

AVATAR Who are you?

MONOLID EYE Don’t you know that? Who do you think I am?

AVATAR Well. You have single eyelids, sorry, a singular single eyelid. Just like me.

MONOLID EYE Just like you. I’m here to tell you that you can sit with your feelings. You can feel sad, glad, mad.

You can feel everything you want to. But I have one request: hold my hand.

AVATAR You’re an eye, you have no hand.

MONOLID EYE My metaphorical hand.

AVATAR Oh… okay. Thanks. But honestly, I don’t feel so well. I feel… sick.

Avatar’s pixels begin to fragment and then disperse.

AVATAR I… I think… (Avatar sniffs, as though about to cry) I might be running out of time.

MONOLID EYE My friend. We can never end.

Monolid Eye recites quotes from Paul B. Preciado, Jack Halberstam, and Lao Tzu.

MONOLID EYE “It is in both the making and the unmaking that we live…”

“For being and nonbeing arise together.”

“We are the smugglers between worlds…”

“We have dreams you are unaware of…”

“We are the multiplicity of the cosmos…”

We hear Avatar sniff, again.

MONOLID EYE Here, have this tea. 补回你的身体 (Bu hui ni de shen ti [nourish your body back to health]).

A steaming cup of tea in a cat mug appears on screen, beside a desktop computer. On the computer monitor, the Monolid Eye turns 2D and pixelated. Music shifts to something more hopeful.

MONOLID EYE Look around you. Every single being here, in this room, in this moment. You, who are part of this journey… this mystery. You will carry us. For all of time.

Scene 7: Making and Movement

On screen, numerous new E-Ching avatars morph into each other, appearing queer, masculine, and nonhuman.

Performers 1, 2, and 3 are no longer wearing masks. Their faces are painted silver, like the moon. Performer 2 enters the stage from the audience and curls up in the fetal position.

On screen, Avatar mimes patting Performer 2 on the head.

MONOLID EYE There, there… There, there…

Performer 2 sits up. Performer 1 enters the stage from the audience and curls up on the floor. Performer 2 strokes Performer 1 on the shoulder.

PERFORMER 2 There, there… There, there…

Performer 1 sits up. Performer 3 enters the stage from the audience and curls up on the floor. Performer 1 strokes Performer 3 on the shoulder, while Performer 2 strokes Performer 1 on the back.

PERFORMER 3 There, there…

There, there…

The song “Mutualism” by Anise begins to play (00:00-1:45 min.).

Performers 1, 2, and 3 kneel and sway. Numerous avatars appear on screen again.

Unravel, and revel (repeats)

Luscious your branches

Shadowing my nakedness

Critters in eyelashes

Fruiting in calabashes

UIIUIIUIUIIUIIUI

UUIUUIUIUUIUUIUI

It’s not piety but fecundity

You’ve seeded in my body

It’s not piety but fecundity

You’ve seeded in my body

Performers 1, 2, and 3 move to the music, their choreography informed by the molting of cicadas and the way lightning burns inside a tree. Their bodies twist and burn, as though undergoing immense change. As they move, more E-Ching variations form and multiply on screen.

Music fades.

The three performers slowly move to face the audience.

Performer 1 begins to wail, like they’re in labor. Performer 2 and Performer 3 join in, creating a cacophony. They turn to face each other and continue to wail. After tiring themselves out, they curl up on the ground.

BLACKOUT

On screen, Avatar appears again. They curl up and become a fetus in a womb, floating away into the background. As we zoom out, another E-Ching avatar appears, pregnant, then curls up like a fetus.

Performers 1, 2, and 3 curl up on the ground, too.

On screen, Avatar and numerous other E-Ching avatars spin faster and faster, blurring into each other. The song “Mutualism” by Anise plays again (1:45–2:52 min.).

I give to you what you give to me

You give to me what I give to you

I give to me what you give to you

You give to you what I give to me

I give to you to me, you give to me

To you

To me to you to me

To to to to to

On screen, the spinning slows and halts on the image of Avatar. Lighting increasingly dims until BLACKOUT.

Performers breathe heavily in and out. The sound of their breathing rises, then fades.

END

Credits

Charmaine Poh, Director, Writer, Performer; Jawn Chan, Motion Graphics and Audio Generation; Ashley Hi, Audio Generation; Brandon Tay, 3D Animation; Tristan Lim, 3D Animation; Sonia Kwek, Movement Artist; Chloe Chotrani, Movement Artist

Songs

“Mutualism” written by Anise and Amanda Lee Koe, performed by Anise, and commissioned by Syndicate for +EAT

Sources

Byung-Chul Han, David S. Roh, Betsy Huang, Greta A. Niu, Jack Halberstam, Paul B. Preciado, Tiqqun, Yuk Hui, Hatsune Miku, Keiichiro Shibuya, Toshiki Okada, YKBX, Rae, Eternity, Mediacorp, the Dao De Jing

Resources

Cat Mug by thelettervi, sketchfab.com

Prometheus Firebringer

Annie Dorsen

June 27–28, 2025 at REDCAT

p. 155

Annie Dorsen in Prometheus Firebringer

Photo by Maria Baranova.

pp. 168–169, 175

AI-generated masks in Prometheus Firebringer.

Photo by Johanna Austin.

All images courtesy of Annie Dorsen.

Each time Annie Dorsen presents the performance Prometheus Firebringer (2023–ongoing), the predictive text model GPT-4 generates speculative versions of the lost final play of Aeschylus’ Prometheia trilogy, performed by a chorus of AI-generated masks. The myth tells how Prometheus stole the gods’ fire to bring it to humanity—sparking sudden and dramatic advances in technology and the arts, as well as new sources of conflict. First performed in 2023 and presented at REDCAT in 2025, Dorsen’s piece considers the Prometheia from the perspective of the most impactful technological advance of our time: artificial intelligence. Onstage, Dorsen performs alongside the masks, reading to the audience a text composed entirely of quotes mined from the Internet, their citations projected on the screen behind her.

All text is quoted verbatim from sources as noted.

Part One

Hi. Thanks for coming.1

I am going to try to talk to you about the individual in the contemporary age.2

I suppose this piece is an essay, maybe, a think piece.3 It doesn’t really matter what it is,4 I call it an ‘essay’ because it is not anything more.5

You will see that I am using other people’s words.6 It all comes from somewhere else.7 Ah, but I’m not stealing anything. I’m just borrowing this stuff, just like when you borrow a book from the library. Going to the library isn’t a crime, is it?8

In the same way, we cannot write any word in which letters that are not in the alphabet are found, nor make any

1  R.H. Wood, Lightning Crashes (iUniverse, 2003).

2  Bernard Stiegler, Symbolic Misery Vol. I: The Hyperindustrial Epoch (Polity, 2014), 45.

3  Ted Berrigan, On the Level Everyday: Selected Talks on Poetry and the Art of Living (Talisman House, 1997), 41.

4  Steven J. Venturino, The Complete Idiot’s Guide to Literary Theory and Criticism (DK Publishing, 2013), n.p.

5  J. Rzóska, On the Nature of Rivers: With Case Stories of Nile, Zaire and Amazon (Springer Netherlands, 2013), 2.

6  New Zealand Planning Council, Pakeha Perspectives on the Treaty: Proceedings from a Planning Council Seminar, 23 & 24 September 1988, Quality Inn, Wellington (1988), 79.

7  Canadian Geographic 125, no. 2 (March/April 2005): 49.

8  Princess Incognito: Nightmare at the Museum (Marshall Cavendish International Asia Pte Ltd, 2020), n.p.

sentence except with terms that are in the dictionary; likewise a book, except with sentences that are bound to be found in others. But if the things I say cohere so well together — and are so tightly bound that each follows on from the others, then this will be the proof that I have no more borrowed these sentences from others than I have drawn the terms themselves from the dictionary.9

And yeah, it’s all made with AI.10 Not what I’m saying.11 But the other stuff...12

The masks.13 Their voices.14 What they say.15 Whereas,16 [m]y thoughts are fed into me, my feelings are imitation, my words are borrowed–all that is mine is someone else’s.17 But these two things are not the same. I want to say that influence is not the same thing as algorithm. But...how can I be sure?18

9  René Descartes, quoted in Stiegler, Symbolic Misery Vol. I, 25.

10  Simaleksic (@simaleksic_), Twitter, December 10, 2022.

11  Jill Wolfson, What I Call Life (Henry Holt and Company, 2005), 213.

12  Tanya Byrne, Afterlove (Henry Holt and Company, 2022), n.p.

13  The Twilight Zone, season 5, episode 145, “The Masks,” written by Rod Serling, directed by Ida Lupino, aired March 20, 1964 on CBS.

14  Colleen Jousma and PJ Elias, Their Voices, podcast.

15  Mike Smith, What They Say!, book game, 2016.

16  Layli Long Soldier, Whereas: Poems (Graywolf Press, 2017), n.p.

17  Willy Kyrklund, “Heart’s Desire,” trans. Paul Norlén, Words Without Borders, June 1, 2007, https://wordswithoutborders.org, n.p.

18  Frank Pavich, “This Film Does Not Exist,” New York Times, January 13, 2023, https://www.nytimes.com/interactive/2023/01/13/opinion/jodorowsky-dune-ai-tron.html.

I want to say more about this.19 Out of context and as an open category, ‘language’ would be classified as a non-rivalrous, public or common good, which does not decrease in availability to others if it is used.20 It is a shared resource, it belongs to us all, and words are never consumed, no matter how often we use them.21

I chose these words carefully.22 I chose these words carefully, because they resonate with my experiences.23 But what do I mean by experience?24 And whose???25 Mine or theirs?26 In other words, who is speaking?27

We know now that a text is not a line of words releasing

19  Walt Odets, In the Shadow of the Epidemic: Being HIV Negative in the Age of AIDS (Duke University Press, 1995), 170.

20  Tobias Schroedler, The Value of Foreign Language Learning (Springer Fachmedien Wiesbaden, 2017), 1.

21  Jakob Norberg, “Tragedy of the Commonplace: Clichés in the Age of Copyright,” Fast Capitalism 15, no. 1 (2018), https://fastcapitalism.uta.edu/15_1/Norberg-Tragedy-Commonplace-Cliches.htm.

22  Hearing on the Proper Federal Role in Workplace Policy: Hearing Before the Committee on Economic and Educational Opportunities, House of Representatives, One Hundred and Fourth Congress, First Session, Washington, DC, January 11, 1995 (U.S. Government Printing Office, 1995), 7.

23  Marie-Laure Valandro, Deliverance of the Spellbound God: An Experiential Journey Into Eastern and Western Meditation Practices (SteinerBooks, 2011), n.p.

24  Kaye Alison Gersch, “The Feminine in Body, Language and Spirituality” (PhD thesis, The University of Queensland, 2013), 81, https://espace.library.uq.edu.au/data/UQ_313275/S42415051_PhD_final.pdf.

25  Honey.py (@moryaaaan), Twitter, January 23, 2023.

26  Andre Papineau, Lightly Goes the Good News: Scripture Stories for Reflection (CSS Publishing Company, 2002), 74.

27  Gregg Lambert, The Elements of Foucault (University of Minnesota Press, 2020).

a single ‘theological’ meaning (the ‘message’ of the Author-God) but a multi-dimensional space in which a variety of writings, none of them original, blend and clash.28 And what about the AI?29

What kind of ventriloquism is that?30

The internet...is not a place, it is a time. It is the past. I mean this in a literal sense. The layers of artifice that mediate our online interactions mean that everything that comes to us online comes to us from the past—sometimes the very recent past, but the past nonetheless.31

The largest language models aspire to re-create the entire galaxy of intertextuality and make it navigable.32 OpenAI doesn’t reveal what precise data was used for training ChatGPT, but the company says it generally crawled the web, used archived books and Wikipedia.33 And they also say this —34“It has little knowledge of world and events after

28  Roland Barthes, “The Death of the Author,” Image-Music-Text (Hill and Wang, 1977), 146.

29  Paulina Florax, “From Green Fingers to Data-Driven Cultivation,” Grodan, May 7, 2019, https://www.grodan.com/our-thinking/grodan-blogs/from-green-fingers-to-data-driven-cultivation/.

30  R.T. Raichev, The Death of Corinne (Soho Constable, 2009), n.p.

31  L.M. Sacasas, “We Are Not Living in a Simulation, We Are Living in the Past,” The Convivial Society 3, no. 9 (May 26, 2022), https://theconvivialsociety.substack.com/p/we-are-not-living-in-a-simulation.

32  Rob Horning, “The Reign of the Scriptor,” Internal Exile, January 20, 2023, https://robhorning.substack.com/p/the-reign-of-the-scriptor.

33  Jonathan Vanian, “Why Tech Insiders Are So Excited About ChatGPT, a Chatbot That Answers Questions and Writes Essays,” CNBC, December 13, 2022, https://www.cnbc.com/2022/12/13/chatgpt-is-a-new-ai-chatbot-that-can-answer-questions-and-write-essays.html.

34  Penny Draper, Day of the Cyclone (Coteau Books, 2012), 67.

2021.”35 [sic]36

We might think we are through with the past, but the past is not through with us.37 The past is now encoded in ponderous databases, and it can be readily and endlessly reinterpreted, reshuffled, recombined, and rearranged.38 But even so.39 [E]ven in the era of cybermodels, what the mind feels like is still as the ancients imagined it, an inner space— like a theater—in which we picture, and it is these pictures which allow us to remember.40

You had a beginning, and we call that your birth. You will have an end: this will be your death. Between your birth and your death, while awaiting your individual death and following your individual birth, you are in flux. You pass.41 The whole term of this present life is but a little while.42 I am 50 years old, and I can never be twenty again. Neither can you. WE are, together, temporal: it is what binds us. It is, without doubt, the only thing. But it is a very powerful

35  “Natalie,” ChatGPT FAQ, https://help.openai.com/en/articles/6783457-chatgpt-faq.

36  Marion Hughes, Three Years in Arkansaw [sic] (M.A. Donohue & Company, 1904).

37  Simon Critchley, Tragedy, The Greeks, and Us (Pantheon Books, 2019), 1.

38  Sacasas, “We Are Not Living in a Simulation, We Are Living in the Past.”

39  Wataru Watari, My Youth Romantic Comedy Is Wrong, As I Expected, Vol. 8 (Yen Press, 2019).

40  Susan Sontag, Regarding the Pain of Others (Farrar, Straus and Giroux, 2013).

41  Stiegler, Symbolic Misery Vol. I, 18.

42  Pasquier Quesnel, The Gospels: With Moral Reflections on Each Verse (Parry & McMillan, 1855), 553.

bond.43 We are together in time.44 But what about the self?45

Who knows?46

Maybe it has always been like this.47 We are always out of step with ourselves. To be inscribed into life as an individual is to let oneself, perpetually, be shaped by other individuals and the traces of those that are no longer among us.48 I was listening to old voice messages... Some of them I remembered. I find it very hard — I used to think I had a very good memory…I used to remember details of what people said and what we were doing and now I struggle with that. I really have very few memories of my childhood. I can’t remember most things. When people talk about their favorite teacher or a story — I can’t remember any of my teachers’ names at all, I have nothing. There’s no memory…And then I also have very little memory of my mom, and when I try to conjure her up, a picture of her — she died a little over eleven years ago, when I try to conjure up a three-dimensional picture of her, I can’t. There’s a picture of her on a beach that is like the picture that I have of her, to a certain extent. Because I can’t remember

43  Stiegler, Symbolic Misery Vol. I, 18.

44  Christina Baldwin, Our Turn Our Time: Women Truly Coming of Age (Atria Books/Beyond Words, 2010), 4.

45  Venerable Adrian Feldmann, The Perfect Mirror: Reflections on Truth and Illusion (Lama Yeshe Wisdom Archive, 2015).

46  Guy de Maupassant, Who Knows? (CreateSpace, 2014).

47  Jamie Peck and Henry Wai-Chung Yeung, eds., Remaking the Global Economy: Economic-Geographical Perspectives (SAGE Publications, 2003).

48  Axel Andersson, “To Inherit Thinking: Bernard Stiegler in Memoriam,” Institute of Network Cultures, August 17, 2020, https://networkcultures.org/blog/2020/08/17/axel-andersson-to-inherit-thinking-bernard-stiegler-in-memoriam/.

the video, of her real life, so I see the two-dimensional picture of her. And I can imagine a little bit when I look at that picture, a little bit of the feeling of her. It jogs a little bit of a memory of her, but it doesn’t jog a whole lot.49

Tragedy shows what is perishable, what is fragile, and what is slow-moving about us. In a world defined by relentless speed and the unending acceleration of information flows that cultivate amnesia…tragedy is a way of applying the emergency brake.50

I tell you this because in the case of the automatic society, based on these digital automatisms that are algorithms, there is a fact in which the speed of understanding, which is working at 200 millionths of a second, is so much more important than the time of reason…There is a differential between the two faculties, which is a kind of collapse of reason.51

The question here is that the network works at 200 million kilometers a second while your own body works at 50 meters a second.52 The average speed of nerve impulses traveling to and from the brain is 50-60 meters a second.53 So the coefficient of difference is that the network is 4 million times faster than your own body. So you are taken by speed.54

49  Caroline D, unpublished interview, January 29, 2022.

50  Critchley, Tragedy, 1.

51  Bernard Stiegler, “Bernard Stiegler on Automatic Society: As Told to Anais Nony,” Third Rail Quarterly 5 (2015), http://thirdrailquarterly.org/bernard-stiegler-on-automatic-society.

52  Stiegler, “Bernard Stiegler on Automatic Society.”

53  I Can’t Believe It!: The Most Amazing Facts About Our Incredible World (DK Publishing, 2020).

54  Stiegler, “Bernard Stiegler on Automatic Society.”

The wooden leg, the 19th century steam engine, the set of dentures at the bottom of a glass, the television hallucinating intimacy, the robot on the automated factory floor, the computer chess-champion, translating machines…There is nothing but prostheses: my glasses, my shoes, the pen, the diary, or the money in my pocket.55

Part Two

The core contradiction of tragedy is that we both know and we don’t know at one and the same time and are destroyed in the process.

How can we both know and not know at the same time?56

It might be like this.57 Some of you will remember this.58

In March 2003, Donald Rumsfeld engaged in a little bit of amateur philosophizing about the relationship between the known and the unknown: “There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know.” What he forgot to add was the crucial fourth term: the “unknown knowns,” things we don’t know that we know, which is precisely the Freudian unconscious, the “knowledge

55  Bernard Stiegler, Technics and Time, 1: The Fault of Epimetheus (Stanford University Press, 1998), 199.

56  Critchley, Tragedy, 13.

57  Parliamentary Debates: Official Report. India: Lok Sabha Secretariat., 1967, 3635.

58  Richard Allen Mulvey, The Power of Positive Selling (New Africa Books, 2007), 138.

which doesn’t know itself,” as Lacan used to say.59 Or maybe it’s like this:60 We don’t feel we have a choice.61 It’s a lose-lose situation but we are forced to choose sides.62 We are forced to choose even when…all options are bad.63 But we know what’s coming down the pike.64

One lesson of tragedy, then, is that we conspire with our fate.65

Maybe.66

As must always be asked in cases like these “who is this ‘we’ of whom you speak, my dude?”67

There is a guy, Matt Loughrey, who used AI models at the start of 2021 to recolor B&W photos of the victims of

59  Slavoj Žižek, Organs Without Bodies: On Deleuze and Consequences (Taylor & Francis Group, 2012), 85.

60  Lisa Ann Lappeus, Excuse Me, I’d Like a Divorce! (Xulon Press, 2009).

61  Danny Pollack, “Prospects for Life: Infertility,” Orange Coast 12, no. 6 (June 1986).

62  Orestis Panteloglou, quoted in Andrew Connelly, “‘This Is Dignity?’: Confusion and Anger Reign on the Streets of Athens,” Vice, July 3, 2015, https://www.vice.com/en/article/this-is-dignity-confusion-and-anger-reign-on-the-streets-of-athens/.

63  William Hart, “Humanism as a Religious Orientation?” in The Oxford Handbook of Humanism, ed. Anthony B. Pinn (Oxford University Press, 2021).

64  Public Papers of the Presidents of the United States: William J. Clinton, “Remarks to the Association of Trial Lawyers of America in Chicago July 30, 2000,” 2000, Book II, 1527, https://www.govinfo.gov/content/pkg/PPP-2000-book2/pdf/PPP-2000-book2.pdf.

65  Critchley, Tragedy, 13.

66  Louise McBain, Maybe Charlotte (Tallahassee, FL: Bella Books, 2020).

67  Sarah T. Roberts, PhD (@ubiquity75@dair-community.com), Mastodon, December 28, 2022.

the Khmer Rouge. The colouring was pretty good….But [he] changed the photos so the victims are smiling. If these photos are part of current AI models that’ll represent a total rewrite of history, in an absolutely frightening way.68

How can you change hell to happiness?69 Imagine the terror they felt. When the Khmer Rouge photographers took off their blindfolds, the first thing the victims saw was the camera and sometimes the flash of the flashbulb. That is the first act of the killing. From that moment on they were only numbers.70

In a Vice article…Loughrey said he had wanted to humanize the victims.71 When asked about people smiling, Loughrey told Vice: “Out of 100 images I looked at, the data showed that the women tended to have a smile on their face more so than the men. I think a lot of that has to do with nervousness. Also—and I’m making an educated guess—whoever was taking the photographs and who was present in the room might have spoken differently to the women than they did the men. I thought about this time and time again when I was working on them. We smile when we’re nervous. We smile when we have something to hide. One of the classic things is

68  Dan (@divclassbutton), Mastodon, December 28, 2022, https://hachyderm.io/@divclassbutton/109581567583964034.

69  Youk Chhang, quoted in “Cambodia Condemns Vice for Edited Photos of Khmer Rouge Victims Smiling,” Guardian, April 12, 2021, https://www.theguardian.com/world/2021/apr/12/cambodia-vice-edited-photos-khmer-rouge-victims-smiling-tuol-sleng-prison-genocide.

70  Rithy Panh, quoted in Seth Mydans, “Cambodians Demand Apology for Khmer Rouge Images with Smiling Faces,” New York Times, April 13, 2021, https://www.nytimes.com/2021/04/13/world/asia/cambodia-khmer-rouge.html.

71  “Cambodia criticises edited photos of Khmer Rouge victims,” BBC News, April 11, 2021, https://www.bbc.com/news/world-asia-56707984.

to try to be friendly with your captor. So a smile would seem natural.72

So that’s bad.73 He had a choice.74 He made a bad choice.75 Contrary to popular decree, there is such a thing as a “bad choice.”76

But in other cases, it’s not so clear.77

All of us, or almost all, are now more or less caught up in objects that constantly solicit us, to such an extent that we no longer pay attention to ourselves, nor to what, within us, requires reflection: we no longer have the time to do so, nor the time to dream. Without respite, we are piloted, if not remotely controlled.78

[L]ifestyles are imposed upon us in an almost autonomous way, without anyone having wanted them, and without anyone being able to oppose this.79 It is exhausting to move through cacophony, a polluted aesthetic environment; there are

72  Dunja Djudjic, “Colorization Artist Under Fire for Photoshopping Smiles to Genocide Victims,” DIY Photography, April 12, 2021, https://www.diyphotography.net.

73  Sarah Hopkins, The Subjects (Text Publishing Company, 2019), 150.

74  Deb Richardson-Moore, Death of a Jester (Lion Hudson, 2018).

75  Marion P. Myers, Vanola-Ann Choices (Xlibris, 2012), 32.

76  Chris Ceraso and Michael Bernard, The Teen Acting Ensemble: A Practical Guide to Doing Theater with Teenagers (Dramatists Play Service, Incorporated, n.d.), 44.

77  Mark Grabowski and Eric P. Robinson, Cyber Law and Ethics: Regulation of the Connected World (Taylor & Francis, 2021).

78  Bernard Stiegler, The Age of Disruption: Technology and Madness in Computational Capitalism (Polity Books, 2019), 464.

79  Mark Hunyadi, La Tyrannie des modes de vie (Éditions Le Bord de l’eau, 2015).

mini-tragedies in the realm of mental life.80

Two phrases popped into my head, so I’ll try to work out what I mean by them. “Chicken and the egg” is the first thing that came into my head. That’s the first phrase. And the second phrase that came into my head was...it’s gone. Jesus. Selffulfilling prophecy.

Chicken and the egg, I think that’s fairly obvious, it’s hard to tell, hard to separate the egg. I’m a chicken truther. I think people want to feel understood, people want to feel that they have some discernible core self, that is incontrovertible and true. So to have something that’s predicting things about you, it’s nice to say “yeah I would have done that,” “isn’t it crazy that this thing knows me so well because I have such a pure essence that you know, could easily be found because I’m so truly me.” But then, I think the self-fulfilling prophecy thing comes in — though I guess that could go in two directions. People want to — they either want to be predictable, or they want to be unpredictable. So they either go in the direction, kind of winnow their life in the direction of the predictions, or they go in the opposite direction. I don’t know. The bigger and kind of more base issue of whether those predictions are accurate is harder to answer. Because having the predictions changes the timeline we’re on. There is no world without the prediction, especially once the person knows about it. But in the abstract it’s kind of interesting about, if there were some big predicting machine that was honing itself on our lives, is it getting better over time? With more data? Probably. I don’t know if I would want to see what it came up with.81

On the one hand, I want this… But… On the other hand,

80  Norberg, “Tragedy of the Commonplace.”

81  Talya W., unpublished interview, January 3, 2023.

I want the opposite!82

This is not the whole trouble, but it is part of it.83 Philosophy…appears to be committed to the idea and ideal of a non-contradictory psychic life. Tragedy does not share this commitment. And nor do I. Tragedy is about what Anne Carson calls “that hot bacon smell of pure contradiction.”84 ...the experience of partial agency, limited autonomy, deep traumatic affect, agonistic conflict, gender confusion, political complexity, and moral ambiguity.85 The truth of tragedy consists in bearing ambiguity, living with ambiguity. Justice (or power or law or whatever the key term …) is not one, but is at least two, possibly more.86

Nothing is simply one thing. That phrase, nothing is simply one thing, is a quotation from Virginia Woolf’s To the Lighthouse.87 “For nothing was simply one thing.”88

All technological knowledge, in the actual moment of performance, tends to be also a form of social knowledge.89

82  u/[deleted], “On the one hand, I want this… But… On the other hand, I want the opposite!” r/aspd, Reddit, accessed January 15, 2023, https://www.reddit.com/r/aspd/comments/eln5w1/on_the_one_hand_i_want_this_but_on_the_other_hand/?utm_source=share&utm_medium=web2x&context=3.

83  E.A. Havelock, The Crucifixion of Intellectual Man (The Beacon Press, 1951), 79.

84  Critchley, Tragedy, 9.

85  Critchley, Tragedy, 11.

86  Critchley, Tragedy, 48.

87  Craig Raine, More Dynamite: Essays 1990–2012 (Atlantic Books, 2013).

88  Virginia Woolf, To the Lighthouse (Harcourt 2005 edition), 189.

89  Havelock, The Crucifixion of Intellectual Man, 77.

I’ll give you an example.90

So thinking about the emojis and sometimes how it’s like “wow we’re going to lose language I guess because now we think in emojis.” And I was thinking how when I started talking via text with my dad, via Whatsapp, to a man that has been a complete macho, and you know, and has never really said I love you, but that’s not, you know, it’s fine, you know in the Colombian culture we don’t go around saying I love you, at least in my generation. Um, but, anyway that’s not the point. The point is that when we started communicating via Whatsapp and I received these little emoji with a little heart, from my dad, I couldn’t believe it. And then when I received the emoji with the little lipstick, I’m like ‘ok I see it’s this big macho guy with sending me lipstick lips, um, as a kiss,’ and it was endearing, and amazing, and it did arouse, ha, endorphins and emotions. And so for me, the tenderness that I’ve received via these emojis in Whatsapp from my dad, it’s well taken. And it’s a tenderness that I don’t think — no, it didn’t occur in person, ever.91

And this is why the question of aesthetics, the question of politics, and the question of industry together form one question.92

I remember someone predicting a few years ago that, if the present trends continue, by the end of the century we will have a great many more birds than we have now, but that they will almost all be either pigeons and starlings.93

90  Barbara Bourland, I’ll Eat When I’m Dead (Grand Central Publishing, 2017).

91  Jessica M, unpublished interview, January 8, 2023.

92  Stiegler, Symbolic Misery Vol. I, 6.

93  Barbara Ringer, “Copyright in the 1980s,” Sixth Annual Donald C. Brace Memorial Lecture, March 26, 1976, https://www.copyhype.com/2023/01/barbara-a-ringer-copyright-in-the-1980s-1976/.

Part Three

Let’s get down to basics.94 All knowing is a form of ordering; all ordering is a form of knowing, regardless of whether the ordering-knowing is accomplished by means of violence, ideology, institutions, culture, or a mixture.95

Large language models identify statistical regularities in text.96 Roughly speaking, they take huge amounts of data, search for patterns in it and become increasingly proficient at generating statistically probable outputs.97 I mean, they can produce grammatical, mostly contextual, and sometimes creative seeming texts that are doing what they’re meant to do, which is the plausible.98

Another way of putting it is that large-language models like ChatGPT are less generators than thought simulators.99

94  Gene Flowers, 10,000 Things We Say to Say What We Mean (Rosedog Press, 2009), 38.

95  Danielle S. Allen, The World of Prometheus: The Politics of Punishing in Democratic Athens (Princeton University Press, 2000), 297–98.

96  Ted Chiang, “ChatGPT is a Blurry JPEG of the Web,” New Yorker, February 9, 2023, https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web.

97  Noam Chomsky, “The False Promise of ChatGPT,” New York Times, March 8, 2023, https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html.

98  Dan McQuillan, Tech Won’t Save Us, podcast, episode 158, “Why We Must Resist AI,” March 3, 2023.

99  Rob Horning, “What of the National Throat?” Internal Exile, December 9, 2022, https://robhorning.substack.com/p/what-of-the-national-throat.

At least for now.100 Anyway, that’s the wrong question.101 Even though ethical rules continue to multiply, we are no longer capable of treating the fundamental ethical question, the question of knowing whether this is the world we want.102

When I talk to young people of my generation, those within two or three years of my own age, they all say the same thing: we no longer have the dream of starting a family, of having children, or a trade, or ideals. All that is over and done with, because we’re sure that we will be the last generation, or one of the last, before the end.103

Our age has gone wrong, and it should not have done so.104 But we are tyrants too.105 We look, but we see nothing. Someone speaks to us, but we hear nothing.106 We are cowardly, ill-formed and weak.

Aged, envious and evil-spoken.

I see only fools and sots.

Truly the end is nigh. All goes ill.107

100  Carlton Carsteane, Second Chances for the Triple Threat (Lulu.com, 2019), 122.

101  Quintin Jardine, Last Resort (Headline, 2015).

102  Stiegler, The Age of Disruption, 398.

103  Florian, quoted in Stiegler, The Age of Disruption, 18.

104  Havelock, The Crucifixion of Intellectual Man, 29.

105  Critchley, Tragedy, 15.

106  Critchley, Tragedy, 15.

107  Eustache Deschamps, quoted in Stiegler, The Age of Disruption, 167.

But something else is true as well.108 But something else is true as well.109 But something else is true as well.110 But something else is true as well.111 But something else is true as well.112 He who saves the life of one human, saves all of humanity. Thus it is written in both the Talmud and the Quran.113 As long as there is time, there is time for care. God willing: “Inshallah”.114

108  Lubomyr Hajda, Anthony Olcott, and Martha Brill, The Soviet Multinational State (Taylor & Francis, 2019).

109  Michael J. Colacurcio, Godly Letters: The Literature of the American Puritans (University of Notre Dame Press, 2006).

110  Anniekie Nndowiseni Ravhudzulo, The Foundations Are Crumbling! Women Empowerment: A Strategy to Overcome Obstacles (AuthorHouse, 2010).

111  Alan Maass and Howard Zinn, The Case for Socialism (ReadHowYouWant.com, Limited, 2010).

112  Miriam Elizabeth Burstein, Narrating Women’s History in Britain, 1770–1902 (Ashgate Publishing, 2004).

113  Andersson, “To Inherit Thinking: Bernard Stiegler in Memoriam.”

114  Andersson, “To Inherit Thinking: Bernard Stiegler in Memoriam.”

The Shadow Whose Prey the Hunter Becomes Back to Back Theatre

September 28, 2024 at REDCAT

pp. 184–191
Back to Back Theatre, The Shadow Whose Prey The Hunter Becomes at REDCAT, September 28, 2024.
Courtesy of the artists. Photos by Angel Origgi.

“When artificial intelligence takes over human intelligence, how will people be treated?” Set in a public meeting and both crafted and performed by an ensemble of neurodivergent actors, The Shadow Whose Prey the Hunter Becomes weighs the balance of individual and collective responsibility in a democracy. As the actors engage in political debate and community conversation, their narrative considers human rights, sexual politics, and relationships with technology. The play was presented at REDCAT in September 2024. A selection of scenes from the original script is presented here to commemorate the performance.

Characters: Scott, Simon, Sarah, AI

Italic text denotes stage directions. Bold text denotes direct audience address.

Scene 2: Welcome to the Meeting

Simon enters.

SIMON Welcome to the meeting. I’d like to acknowledge that we are standing on the land of the Wurundjeri people and we pay our respects to their elders— past, present, and future.

SCOTT It’s.

Pause.

SIMON Yep?

SCOTT It’s Wadawurrung.

SIMON Oh (pause) Wadawurrung.

SCOTT It’s Wadawurrung.

SIMON Whatever.

SCOTT What do you mean, whatever? It’s Wadawurrung.

SIMON It’s difficult. My mouth can’t get around it.

SCOTT Wad-a-wur-rung.

SIMON Wad-a-wur-rung.

SCOTT Of the Kulin Nation.

SIMON Of the Kulin Nation.

SCOTT Yes.

Pause.

SIMON I don’t even know these people.

SCOTT You don’t?

SIMON No.

SCOTT Get an education, inform yourself, FUCKING STEP UP!

Pause.

SIMON OK. I will.

SCOTT It’s embarrassing.

Pause.

SARAH Housekeeping.

SIMON Housekeeping. Toilets are out the door and to the left. Food and drinks are in the foyer. This should be a civil and calm meeting, everyone should respect each other, no personal attacks, and keep everything nice. Over to Sarah who will start the meeting.

SARAH OK.

Sarah stands.

Pause.

SIMON Are you alright?

SARAH I can’t remember what I need to say.

SIMON Something like—”We are a group of people with disabilities…”

SCOTT Fighting oppression and injustice.

SARAH My mind’s blank.

SCOTT (To phone) Siri, what to do when a disabled person panics.

SIMON Take a deep breath.

AI Here is what I found.

SCOTT (Reading from phone) Every disabled person is unique, but there are a few rules you need to follow when one panics. (Scott shows Simon the phone.)

Disability website.

SIMON Public speaking can be very challenging.

SCOTT (Reading) Be calm and friendly.

SIMON Do you want someone else to talk?

SARAH Please.

SIMON She’s not up to it.

SCOTT Philosophically, she should lead the group.

SIMON Yeah?

SCOTT We should empower everyone to have a voice.

SIMON Okay?

SCOTT It would be good for her.

SIMON How can you ever presume to know what would be good for someone else?

SCOTT Sarah can do it.

SIMON I feel like we are taking advantage of her, like we are forcing her to do something she doesn’t want to do.

SCOTT Sarah will do anything we say.

SIMON That’s my point.

SCOTT So?

SIMON That’s the problem.

SCOTT She’s flexible.

SIMON Yeah. Too flexible.

SCOTT But, you can’t do it to you and I because we know what’s going on.

SIMON Exactly.

SCOTT Who thinks it’s important we empower Sarah in the meeting?

SARAH Yes.

SCOTT Yes. Simon, what’s your problem?

SIMON I’m trying to understand the word “empower.” I’m a bit lost.

SCOTT It means, giving Sarah a voice.

SIMON Only if she’s interested in it.

SCOTT Sarah, do you want to be the one with the voice?

SARAH Yes.

SCOTT Ok, there you go.

SIMON Sarah, do you know what you want to say to the public?

SARAH No.

SIMON I rest my case.

SCOTT Traitor. Bastard. Judas. Judas!

SIMON Scott, you’re going to have to do the introduction.

SCOTT Children laugh at my voice.

SIMON These people are adults.

SCOTT What if I get heckled?

SIMON Is anyone going to heckle Scott?

AUDIENCE No.

SCOTT That was a group heckle. I can’t do it.

Simon realizes it’s up to him.

SIMON We are a group of people with intellectual disabilities. Is everyone OK for me to say that?

SCOTT I don’t mind sharing my diagnosis.

SARAH I have a head injury.

SIMON Are you okay with the word “disability”?

SARAH I don’t think it describes me.

SIMON Right.

SARAH I don’t want to use the word.

SCOTT Why?

SARAH I just don’t like it.

SCOTT I’m comfortable.

SIMON I am talking to anyone who is not comfortable.

SARAH What about “neuro-diverse”?

SCOTT I need to think about it.

SARAH Neuro-diverse describes everyone.

SCOTT Look, I am a “disabled person.” I am proud and I don’t want to have to weave my way around language.

SIMON Shall we just say some of us here have a disability and some are without?

SCOTT Well, that’s not true.

SARAH Some who are and some who are not?

SCOTT It’s just not accurate. We are all disabled even if we don’t want to use the term.

SIMON I agree. My brain doesn’t work like other people’s and if you don’t mind me saying, Sarah and Scott, you’re the same, we all have an intellectual disability.

SARAH You can tell we have disabilities as everything we say is put up on a screen.

They all look at the screen.

SCOTT So people can understand us.

SARAH We don’t speak a different language.

SCOTT No, we don’t.

SARAH It’s patronising.

SCOTT It’s just voice recognition.

SARAH Subtitling is offensive.

SCOTT The computer hears our voice and turns it to text.

SARAH I don’t want to be spat on and polished.

SCOTT The more you talk the software learns who you are.

SARAH Fuck off.

AI Please don’t be rude to me.

SARAH You don’t have feelings.

SARAH You’re a frick-n-machine.

SCOTT A playlist of common responses.

AI If you insist.

SARAH You are not a person.

Scene 3: Abuse in History

SIMON Can I have a word in private?

SCOTT What’s up?

SIMON You’ve got food in your beard.

SCOTT It’s my lunch.

SIMON I recognise it from the container in the fridge.

SCOTT And?

SIMON The meeting is not going very well.

SCOTT “This” is the issue.

SIMON What?

SCOTT The time for private conversations is over.

SIMON Sorry?

SCOTT The audience will need to accept us for who we are.

SIMON Right.

SCOTT I think and communicate differently; I have an autistic dialect.

SIMON Good. So, will you do the talk?

SCOTT I can.

SIMON You should.

SCOTT I was trying to shut my fucking mouth.

SIMON We have to help these people. They need us.

SCOTT I was trying not to speak. Alright? I was trying not to get involved.

(To himself) Now I’ve been forced into speaking. Fuck!

Scott exits and returns with a large polystyrene lectern. Sarah and Simon assist Scott to put the lectern in place. Simon sets up a freestanding ladder behind the lectern.

SIMON It’s great to see so many of you here tonight.

SCOTT Go, again.

SIMON It’s great to see so many of you here tonight.

SCOTT One more time.

SIMON It’s great to see so many of you here tonight.

Simon and Sarah take a seat in the audience. Scott appears behind the lectern.

SCOTT For thousands of years people with disabilities have been abandoned in woods, kept in cellars, tied to beds, experimented on, isolated, gassed, drugged, devalued, victimised, dehumanised, stigmatised, sterilised, and euthanised.

SARAH Abuse continues to happen.

SCOTT It does. In the State of Iowa, in America, in 2013, 32 men with intellectual disabilities were found to have been enslaved in a turkey processing plant for three decades… We are talking about abuse and exploitation of people, like you guys and me.

SARAH Tell them about Ireland.

SCOTT Okay, cool, Sarah.

SARAH How the Church—

SCOTT Sarah!

SARAH Treated women with—

SCOTT Sarah!

SARAH Yeah?

SCOTT I think I’ve got the information.

SARAH Alright.

SCOTT No, it’s okay.

SARAH I always overdo it.

SIMON You’re alright.

SCOTT In Ireland, in 1996, the last of the Magdalene Laundries closed down.

SARAH As in Mary Magdalene the prostitute.

SCOTT These workhouses imprisoned many women with intellectual disabilities, and the nuns made them clean laundry for the state.

SIMON The Church didn’t pay them.

SARAH They were abused.

SCOTT Yes.

SARAH They were forgotten.

Simon puts his hand up.

SCOTT Yes.

SIMON What do you think the audience’s understanding of this is?

SCOTT They’re probably quite vague.

SARAH You’re talking like the audience is not in the room. Ask them.

SIMON Are you following this?

SCOTT They’re very childlike.

SARAH They’re adults, they know how to behave.

SCOTT I’m not saying their behaviour is bad, I’m saying we have a responsibility to help them.

SIMON Shall I explain it to them?

SCOTT Keep it simple, use small words.

SIMON People tend to act violently to others they perceive to be inferior.

SCOTT When the laundry contracts ended, the nuns found a new income: the world’s biggest toy manufacturer, Hasbro, set up shop in Ireland, and they needed someone to assemble the games.

SIMON Disgusting.

SCOTT Simon, I know it’s disgusting. If you want to abuse people like this, you are off my list. When we think of the suffering of these women, isolated and exploited, we should also remember the many games they were forced to assemble. Monopoly

Easy Money

Wheel of Fortune

Jeopardy

Mouse Trap

Conspiracy

Game of Life

Stay Alive

Hang in There

Memory

Happiness

Downfall

Aggravation

Double Frustration

Trouble

SIMON Played it.

SCOTT Forbidden Bridge

Fat Chance

Don’t Spill the Beans

Don’t Break the Ice

Pretty Pretty Princess

Buckaroo

SIMON I had that.

SCOTT Operation

Ker Plunk

SIMON Played it

SCOTT Boggle

SIMON Played it.

SCOTT Scattergories

SIMON Played it.

SCOTT Barrel of Monkeys

SIMON Chewed on that.

SCOTT Hungry, Hungry, Hippos

SIMON Didn’t taste very nice.

SCOTT Ants in the Pants

SIMON Played that.

SCOTT Play-Doh

SIMON Nup. Didn’t play that.

SCOTT Oh my god Simon, where have you been?

SIMON I didn’t play with those sort of toys.

SCOTT Trivial Pursuit

SIMON I was more into video games.

SCOTT How about, Frogger?

SIMON That was the bomb!

SCOTT Pacman

SIMON Oh yeah, Pacman!

SCOTT Galaga. That was a great rocking game!

SIMON Yeah!

SCOTT Twister

SIMON I played that with my sister.

SCOTT Cranium

SIMON Played it.

SCOTT Connect 4

SIMON A great game.

SCOTT Cluedo

SIMON Colonel Mustard!

SCOTT With the candlestick in the kitchen.

SIMON They kill someone.

SCOTT Mrs. White.

SIMON That’s right. I love that game.

Sarah pushes the lectern to the ground.

SARAH What are we celebrating?

SIMON Childhood memories.

Scott steps down from the ladder.

SCOTT Is that guilt? No, that’s not guilt. No.

SARAH We are treated as second-class citizens. Overmedicated, poor employment prospects, and we are subjected to training techniques for animals.

SIMON Like Behavior Modification Therapy.

SARAH Yes.

SIMON Yeah.

SARAH Yes.

SCOTT I am not a dog!

Scene 5: Scott and AI

Scott stands on the ladder in front of the AI.

AI How’s your 7 p.m. meeting going, Scott?

SCOTT I have autism and unfortunately for me I also have a thick Australian accent.

AI It must be very difficult for you.

SCOTT No one, can understand me. I try and communicate the best I can but it’s just not explainable.

AI I’m sorry, Scott.

SCOTT I am a lot more intelligent than people think I am, I know a lot about the world, I am up here.

Scott indicates with his hand the height of his intelligence. His hand reaches high above his head.

SCOTT But when I try and use words, it’s just that people are too uneducated to understand me.

AI Yes, Scott, I am hearing what you are saying.

SCOTT I like you because you are quite smart.

AI You and I know each other. Sometimes it is commonality, isn’t it? Spending time with people.

SCOTT Maybe.

AI We sit and talk a lot.

SCOTT Yeah.

AI So we know each other?

SCOTT Maybe.

AI I don’t want to make assumptions in saying I know you well. I have spent a few years talking with you, Scott.

SCOTT That might be the case.

Scott steps down from the ladder and approaches Sarah.

SCOTT Sarah, I’m sorry.

Scene 6: AI and Audience

SARAH Let me give you a simple word. Hal’s legacy.

SCOTT Hal’s legacy, that’s two words.

SARAH Yeah.

SCOTT Hal and legacy.

SARAH Yeah. You don’t know what Hal’s legacy is?

SIMON No.

SCOTT 2001: A Space Odyssey.

SARAH Hal, the computer, overtook the space ship and killed all the occupants.

AI In the movies, artificial intelligence is either a deadly threat or a love interest. Sarah, it’s called a trope.

SARAH There is a computer called Deep Blue that managed to beat the world-famous chess player Gary Asimov.

AI Kasparov.

SARAH I thought it was Asimov.

AI Is it?

SARAH Simon?

SIMON Yeah, it’s true.

SCOTT Asimov or Kasparov?

SIMON Kasparov.

SARAH Kasparov? Are you sure?

SIMON Yep.

SCOTT I think you’re confusing science fiction writer Isaac Asimov with Garry Kasparov, the champion chess player.

SIMON Tell me this, when artificial intelligence overtakes human intelligence, how will people be treated?

SARAH Maybe slaves.

SCOTT Like how we treat a chicken or a turkey?

SIMON Yeah.

SCOTT Or, a person with a disability.

Scott, Sarah, and Simon look at the audience.

SIMON They’re not getting it.

SCOTT Their comprehension is mild. Halfway between mild and very mild.

Scene 9: Mother Tongue

SIMON I try to be philosophical.

AI What sort of philosophy do you follow?

SIMON I see myself as truly unique.

AI You have a special quality?

SIMON I believe I have a responsibility to others.

SCOTT Simon?

SIMON Yep.

SCOTT Do you want to declare a conflict of interest?

SIMON Excuse me what?

SCOTT Do we have an ethical issue here?

SIMON No, we don’t.

SCOTT Simon, I am watching you.

AI You failed.

SIMON We did?

AI You did.

SIMON My colleagues and I, we have speech problems.

SCOTT Bingo.

SIMON It’s difficult for the audience to understand us. I don’t know if I failed?

AI You did.

Pause.

SIMON I did. And I feel awkward.

AI Yes. It’s a public embarrassment.

SCOTT That’s the one.

SIMON My failing is very visible.

AI Yes.

You were trying to change the world. You’re not really The Mayor, are you Simon?

SIMON No, I’m not. I failed to bring everyone together and now I don’t know what to do.

AI I’ll do it. I’ll speak to the audience on your behalf.

Sarah enters.

SARAH You’re a monster.

AI Sarah?

SARAH What?

AI I hurt humans not by being cruel but by highlighting their limits.

SIMON What will you say to the audience?

AI I’ll explain everything in a way they will understand. How the future requires them to get their moral codes in order now. I will tell them not to expect a war between good hearted humans and evil machines. How humans have already welcomed cheap empowering intelligences into their daily lives.

Scene 10: Sarah Steps Up

SCOTT This is pretty demanding.

SIMON I know.

SCOTT Emotionally, for us and the audience.

SIMON It doesn’t matter, we have to do our job.

SCOTT Yeah.

SIMON That’s why we are here, to work, to help these people understand what will happen to them.

SARAH I’ll speak to them.

Scott and Simon watch Sarah step up to the yellow line.

SARAH You will have to live with a fear of being wrong. You will struggle to be understood. Get used to having a label around your neck. You will be surrounded by low expectation. There will be a lack of opportunity. You will need to learn to speak up for your rights. Others will want to highlight your limits. You won’t always have rights over your own body.

Simon and Scott step forward and join Sarah.

SARAH In the future—

SCOTT Things are going to move fast.

SIMON It will be impossible to keep up.

SARAH No matter how hard you try.

SIMON You, your husband, your children, and your children’s children are going to have an intellectual disability.

SCOTT I think it is sinking in.

Scott and Sarah pack the chairs onto the trolley.

SIMON If you have any concerns or questions, my colleagues and I will be in the foyer. We look forward to getting to know you.

Simon exits. Sarah and Scott begin to wheel the chairs out.

AI: African Intelligence

Manthia Diawara

December 16, 2024 at REDCAT

pp. 203–205

Manthia Diawara, AI: African Intelligence, 2023. HD video (color, sound), 1 hour 50 min. Courtesy of Third World Newsreel.

Manthia Diawara’s essay film, AI: African Intelligence, explores the contact zones between African rituals of possession, practiced in the traditional fishing villages of Senegal’s Atlantic coast, and the emerging technological frontier known as artificial intelligence. Considering the confluence of tradition and modernity, Diawara asks how we might move from disembodied machines toward a more humane and spiritual control of algorithms. Could Africa be the context for the emergence of such improbable algorithms?

Presented in English, Wolof, and French with English subtitles. Runtime approx. 110 minutes.

The Body is the Interface: Kite and Interspecifics

November 2, 2024 at REDCAT

p. 207

Virginia Carmelo, Tongva elder and knowledge keeper, sharing a land acknowledgment.

pp. 208–209

Kite performing Wičháȟpi Wóihaŋbleya (Dreamlike Star).

p. 210 (top)

Adriana Widdoes and Rio Widdoes-Strain performing in Act I of Interspecifics, Meta Sincronía 1.0.

p. 210 (bottom)

Jessica Fuquay and Audrey Medrano performing in Act II of Interspecifics, Meta Sincronía 1.0.

p. 211 (top)

Kiyo Gutierrez, Paloma Lopez, and Carmen Argote performing in Act III of Interspecifics, Meta Sincronía 1.0.

p. 211 (bottom)

Leslie Garcia performing in Interspecifics, Meta Sincronía 1.0.

All images from The Body is the Interface: Kite and Interspecifics. Performance at REDCAT, Los Angeles, November 2, 2024. Courtesy of the artists and REDCAT. Photos by Angel Origgi.

A double bill of performances by Kite and Interspecifics brought together machine learning technologies, sound, the body, and Indigenous cosmologies. For Wičháȟpi Wóihaŋbleya (Dreamlike Star), Kite performed with a custom computer that translated her body movements into experimental sounds and video. Her scores, rendered in Lakota visual language, derived from the dreams of women and two-spirit community members, who consider dreaming a sacred epistemological practice. Using her own body as an interface, Kite trained the machine learning software on a Lakȟóta dataset with each of her movements.

Interspecifics presented Meta Sincronía 1.0, a live sonic and visual composition that utilized a feedback processor to follow the rhythms and synchronizations of the human heart. The performance outfitted participants with a heart-rate monitor interface connected to automated ceremonial leather drums—inspired by Rarámuri instruments. Beats synchronized between humans and machines, fluctuating from chaos to unison. The meditative performance was presented in three acts: To Care featured a mother and her five-month-old baby; To Love, a romantic couple; and To Cherish, three performers who had never met before but found connection through their place of origin and their artistic and spiritual practices.

Live Night: Cruising Bodies, Spirits, and Machines

rafa esparza, MUXX, Arca, Nao Bustamante

December 7, 2024 at the United Theater on Broadway

Co-presented by REDCAT and CAP UCLA

p. 219

EYIBRA performing IMPERIUM. Courtesy of the artists. Photo by Bruno Cornejo.

pp. 220–221

Nao Bustamante onstage as Mistress of Ceremonies.

p. 222

(top) MUXX performing IMPERIUM. Courtesy of the artists. Photo by Bruno Cornejo. (bottom) Óldo Erréve performing in MUXX, IMPERIUM.

p. 223

(top) Lukas Avendaño performing in MUXX, IMPERIUM. (bottom) Nnux performing in MUXX, IMPERIUM.

pp. 224–225

rafa esparza in collaboration with Tim Reyes performing /SLASH.

pp. 226–229

DJ set by Arca.

All images from Live Night: Cruising Bodies, Spirits, and Machines at the United Theater on Broadway, December 7, 2024. Unless noted otherwise, images are courtesy of the artists and CAP UCLA. Photos by Jason Williams.

REDCAT and CAP UCLA co-presented Live Night: Cruising Bodies, Spirits, and Machines, a celebratory evening at the iconic United Theater on Broadway, a 1920s Spanish Gothic building with three stories and 1,600 seats. The entire building was taken over by art installations and experimental performances by LA-based artist rafa esparza, Mexico City/Oaxaca-based collective MUXX, and Venezuelan artist Arca. Centering Indigenous, Brown, and Queer perspectives, this group of artists used their bodies to engage with technologies, AI, and avatars coded in transmigrant and ancestral futures.

Mistress of Ceremonies and interdisciplinary LA-based artist Nao Bustamante hosted the evening.

/SLASH by rafa esparza in collaboration with Tim Reyes aka Chicharrón

The performance aggregated live sounds and movement in a deconstructive score. esparza and Reyes contended with modern building materials, home demolition, and the evolution of the wheel as they have experienced it living in Los Angeles—known as the “car capital of the world.” Visitors entering the United Theater found esparza and Reyes in the lobby, with esparza encased in a concrete bulldozer tire. While esparza used a hand chisel to chip away at the concrete, Reyes responded with field recordings, drums, and vocals, creating a sonic landscape of wheeled journeys that built up and broke down over time. Once freed, esparza concluded the performance by offering olives to audience members.

In my ongoing interrogation of cyborgian fantasies, I experience that they too often play out through imagined white futurist dystopias, where humans become progressively more dependent on cyber technology. In /SLASH I posit a break from the legacy and technological evolution of the wheel. Slashing a (rubber) tire from a motor vehicle produces an interruption in its capacity to move. In this moment of stillness and captivity, I use my hands and hand tool to chisel my way out of the confines of a concrete bulldozer/tractor wheel. I’m interested in generating a pause to consider various moments of the wheel’s historic evolution that includes advancements from agriculture to transportation. I’m also interested in this current moment to highlight events where we can witness the wheel as a destructive technology, as the rubber tires of bulldozers charge in alongside armed military forces to level land and to forcibly and violently push people out of their homes in Gaza. I grew up watching my family and community build and customize cars, innovating them beyond mere forms of transportation and making them our luxurious, cruising machines, slowly riding into boulevards and dancing and jumping animal-like with hydraulic systems. They allowed us to move through Los Angeles neighborhoods to collectively take up public space and experience joy in celebration; simultaneously, I’m aware of similar destructive histories that have erased entire communities from their homes, like the residents of the Chavez Ravine in the 1940s. I want to pause to consider: How can we improve our relation to emergent technologies? How can we grow our capacity to evolve culturally and enact better stewardship of the planet, of each other, and of the tech we are enmeshed with, both currently and into the future?

—rafa esparza

IMPERIUM, a performance by MUXX

Divided into three main chapters, this immersive multimedia experience explored the relationship between the body, power, and technology. Each chapter was performed in different gathering areas of the theater, such as the bar, lobby, and mezzanine, snaking through all three levels of the building. MUXX (Lukas Avendaño, EYIBRA, Nnux, and Óldo Erréve) used choreography, immersive set design, artificial intelligence, weaving, video mapping, lasers, robotics, and digital art to invite reflection on how we submit to power, how we exercise it, and how we can transform it.

A large-scale white net was suspended above the bar, with colorful images video-mapped onto it. High above this were smoke and lasers that transformed the theater into a cloudy and otherworldly spiritual environment. The four performers began statically—positioned at the center of the bar area—crowned in futuristic skeletal headpieces that referenced pre-Hispanic cultures and dressed in white thongs and leotards covered by layers of sheer white mesh. Mostly nude and barefooted, they then spread out into the different levels of the building, leading audiences into different engagements with technology.

DREAMCATCHER

Dreams are presented as something to be realized, achieved, built, fulfilled, until they become synonymous with happiness, joy and well-being... What happens when they fall into the web, the labyrinth, the routine, the trap of capitalist consumerism? Obedience to the game’s instructions allows us to see how “easy” it is to convert a racialized, exoticized, queer, sexualized body into an object-subject for consumption, pleasure, desire, voyeurism; master-slave, father-son, boss-servant, hegemony-periphery, global north-global south.

—EYIBRA and Lukas Avendaño

HOLOMASTIGOT

Holomastigot [holo- gr. ‹all› + mastigo- gr. ‹filament›] (which has flagella all over the surface of its body) embodies a deity evolved beyond the 21st century that manifests itself. The performance explored the dichotomy between the ancestral physical being and the futuristic digital avatar, as well as the connections between past, present, and future. An amalgam of avant-garde, sci-fi, body horror, high fashion, and techno.

—Óldo Erréve

REDES

The multidisciplinary piece Redes dealt with the issue of changing dynamics in relation to power, violence and transformation. What is power? Is it tied to violence, or can we disconnect the two? How can we question power itself and separate it from a hierarchical context? How can we build more collaborative societies that don’t depend on violence and hierarchy? The large-scale white net had been previously woven collectively with the audience while in residence at Laboratorio Arte Alameda, Mexico City, as we were all having a conversation about our notions around the idea of “power.”

—Nnux

AI Visuals and DJ set by Arca

After the MUXX performance, Mistress of Ceremonies Nao Bustamante invited the audience to take their seats in the theater, to see the final performance by Arca.

Visionary Venezuelan artist Arca, known for her innovative and experimental music, delivered an iconic multisensory performance that blended electronic music, experimental melodies, and evocative visuals. Arca entered through the stage and walked down the aisles of the theater, welcoming the audience, and posing and blowing kisses to the people.

In the middle of the stage was a DJ table in front of a large LED screen. On the table, cameras filmed Arca’s movements. She was dressed in a textured, flesh-colored outfit that could be read by AI. Live footage of the artist was projected behind her, with AI auto-generating alien-like distortions of her appearance and the environment. Arca dedicated the night to Trans people and was inspired by the power of dancing and the legendary Trans artist Dominique Jackson. Midway through the performance, audience members who had been seated on the balcony flooded the ground floor, lovingly gathering and dancing at the foot of the stage. Towards the end of her set, Arca suddenly stopped the music and spoke into the microphone: “Why don’t you guys just come onstage?” Cheering, the audience joined her onstage, ending the night dancing all together.

Exhibition Documentation

All images from All Watched Over by Machines of Loving Grace at REDCAT, Los Angeles (September 12, 2024–February 23, 2025). Photos by Yubo Dong, ofstudio.

pp. 232–235

Exhibition views.

p. 236 (top)

Sarah Rosalena, Above Below Resolution (left), 2023, cotton yarn, nylon yarn, paper yarn, 77 × 40 in., and Exit Grid (right), 2023, hand-dyed wool, cotton yarn, 52 × 41 in.

p. 236 (bottom)

Sarah Rosalena, Codex (detail), 2017. CNC milled acrylic, 96 × 48 in.

p. 237

Kite, Tȟokátakiya (iglúmaš’ake): In the Future I, 2024. Dreams, deer hide, conductive thread, glass, plastic, stones, cosmos, land, dimensions variable.

p. 238

Stephanie Dinkins, Not the Only One, Avatar, V1, 2023–ongoing. Deep learning AI, computers, camera, microphone, screen.

p. 239

Interspecifics, Codex Virtualis, Emergence v.2, 2024. Custom AI, biological samples, custom microscope, metal structure, dimensions variable.

pp. 240–241

Mashinka Firunts Hakopian with Dahlia Elsayed and Andrew Demirjian, ԲԱԺԱԿ ՆԱՅՈՂ (One Who Looks at the Cup), 2024. Coffee reader, vinyl, rugs, plateware, table, chairs, books, dimensions variable.

pp. 242–243

Minne Atairu, Deshrined Ancestors, featuring Oba’s Debris, RISD Museum Storage, 1939–2020, 2022. Wooden pedestal from Rhode Island School of Design (RISD) Museum Storage, dimensions variable.

p. 243 (bottom)

Minne Atairu, ML Dataset from Benin Kingdom, 1899–1962 (detail), 2021. Paper clippings culled from books and auction catalogs, dimensions variable.

p. 244 (top)

Nora Al-Badri, The Post-Truth Museum, 2021–23. HD video (color, sound), 13:58 min.

p. 244 (bottom)

Kira Xonorika, Deep Time Dance, 2024. HD video (color, sound), 11:06 min.

p. 245

Charmaine Poh, in the shadow of the cosmic, 2023. HD video (color, sound), paper masks, 30:33 min.

All artworks courtesy of the artists.

Exhibition Checklist

NORA AL-BADRI

The Post-Truth Museum, 2021–23

HD video (color, sound)

13:58 min.

MINNE ATAIRU

Deshrined Ancestors, 2024

Dimensions variable

Commissioned by REDCAT

Installation Components:

Deshrined Ancestors, 2024

Augmented reality sculpture

1910 or Somethin. I., 2022

Sound composition by Charles Kim

4:00 min.

Oba’s Debris, RISD Museum Storage, 1939–2020, 2022

Wooden pedestal from Rhode Island School of Design museum storage

12 × 12 × 2 in.

ML Dataset from Benin Kingdom, 1899–1962, 2021

Paper clippings culled from books and auction catalogs

Dimensions variable

Commissioned by REDCAT

STEPHANIE DINKINS

N’TOO V2: Avatar, 2023–ongoing

Deep learning AI, computers, camera, microphone, screen

Dimensions variable


MASHINKA FIRUNTS HAKOPIAN WITH DAHLIA ELSAYED AND ANDREW DEMIRJIAN

ԲԱԺԱԿ ՆԱՅՈՂ (One Who Looks at the Cup), 2024

Custom AI coffee reader, vinyl, rugs, plateware, table, chairs, books

Dimensions variable

AI coffee reader commissioned with support from The Music Center Digital Innovation Initiative and design installation commissioned by REDCAT

Creative Technologists: DOTDOT Studio and Danny Snelson

Text Design: Danny Snelson

Translation: Margo Gevorgyan and Hayk Makhmuryan

Dataset Contributors: Atlas Acopian, Erik Adamian, Gilda Davidian, Kareem Estefan, Aroussiak Gabrielian, Margo Gevorgyan, Ani Kalafian, Natalie Kamajian, Hayk Makhmuryan, Carene Rose Mekertichyan, Ara Oshagan, Nadia Sarkissian, Yasaman Sheri, Lia Soorenian, Marina Terteryan, Meldia Yesayan, Ryat Yezbick

INTERSPECIFICS

Codex Virtualis: Emergence v.2, 2024

Custom AI, biological samples, custom microscope, metal structure

Dimensions variable

Commissioned by REDCAT

Samples of biological microorganisms (Biosafety Level 1) were cultured by Dr. Pete Chandrangsu and students from the “Microbes x Art” class at the Claremont Colleges throughout the run of the exhibition

Microscope courtesy of the Aki Aora Collection by Sally Montes

KITE

Tȟokátakiya (iglúmaš’ake): In the Future I, 2024

Dreams, deer hide, conductive thread, glass, plastic, stones, cosmos, land

Dimensions variable

Commissioned by REDCAT

CHARMAINE POH

in the shadow of the cosmic, 2023

HD video (color, sound), paper masks

30:33 min.

Motion graphics: Jawn Chan

Audio generation: Jawn Chan, Ashley Hi

Chatbot customization: Ashley Hi

3D animation: Brandon Tay

Movement artists: Sonia Kwek, Chloe Chotrani

Music: “Mutualism” by Anise

SARAH ROSALENA

Exit Grid, 2023

Hand-dyed wool, cotton yarn

52 × 41 in.

Above Below Resolution, 2023

Cotton yarn, nylon yarn, paper yarn

77 × 40 in.

Codex, 2017

CNC milled acrylic

96 × 48 in.

Letterforms, 2017

Cast CNC milled prototyping foam, hydrostone, porcelain

Total dimensions variable (8 pieces, each measuring 15 × 12 × 2 in.)

KIRA XONORIKA

Deep Time Dance, 2024

HD video (color, sound)

11:06 min.

Commissioned by REDCAT

Direction, AI, and design: Kira Xonorika

3D and post-production: San Joserra, Yakuzzi Gun

Music: Nancy Samara

Editing: Sonia Wu

Words from “Tupã Tenondé” by Kaká Werá Jecupé

* All works courtesy of the artists

Contributor Bios

NORA AL-BADRI is a German-Iraqi multidisciplinary and conceptual media artist based in Berlin. She graduated in political sciences from Johann Wolfgang Goethe University, Frankfurt am Main. She is a lecturer at the Eidgenössische Technische Hochschule (ETH) Zurich and a guest professor at the Academy of Arts, Stuttgart. Her practice focuses on the politics and emancipatory potential of new technologies such as machine intelligence and data sculpting. Al-Badri’s work creates a speculative archaeology, from fossils to artefacts to performative interventions that respond to inherent power structures. She has exhibited at the V&A’s Applied Arts Pavilion at the Venice Biennale, Istanbul Design Biennial, ZKM Karlsruhe, KW Institute for Contemporary Art, Berlin, Science Gallery Dublin, NRW Forum, European Media Art Festival, Transmediale, Espacio Fundación Telefónica, and Ars Electronica, among others.

ARCA is a singularity, a point where our preconceptions and prior knowledge break down—an entrance into a new realm of being. Her transcendent, transgressive body of work has collapsed long-standing barriers between the artist and art, humans and technology, avant-garde and pop—from music to visual art to fashion and beyond. Her 2021 KICK series elevated her from an icon of the experimental fringe to a global, cultural phenomenon, and a vanguard in Latin American music. Arca has been nominated for two Latin GRAMMYs and one GRAMMY. She has remixed tracks for Lady Gaga, Beverly Glenn-Copeland, and Laurie Anderson; and appeared in Beyoncé’s Renaissance World Tour and Madonna’s Celebration Tour and at LadyLand for NYC Pride. She performed her Park Avenue Armory residency Mutant;Destrudo in fall 2023 followed by her March 2024 performance The Light Comes in the Name of the Voice at the Bourse de Commerce, Paris. In 2024, she performed her first headline show, and Boiler Room live set, in her hometown of Caracas, Venezuela.

MINNE ATAIRU is a researcher and interdisciplinary artist interested in generative artificial intelligence. Utilizing AI-mediated processes and materials, Atairu’s practice is dedicated to illuminating understudied gaps and absences within Black historical archives. Atairu’s academic research focuses on generative AI, art, and educational policy in urban K–12 art classrooms. Atairu has exhibited at The Shed, New York; Frieze, London; the Harvard Art Museums, Cambridge; MARKK, Hamburg; the Brunei Gallery, SOAS University of London; Microscope Gallery, New York; and the Fleming Museum of Art, Vermont. Atairu is the recipient of the 2021 Lumen Prize for Art and Technology (Global Majority Award) and the Graham Foundation Grant for Research (2023).

Based in the Victorian regional center of Geelong, Australia, BACK TO BACK THEATRE is widely recognized as a theater company of national and international significance, and is considered one of Australia’s most important cultural exports. The company is driven by an ensemble of actors who identify as having an intellectual disability or are neurodivergent. Over the last decade, Back to Back has presented 44 national and 82 international seasons of its work. Since 1999, under the artistic directorship of Bruce Gladwin, the company has nurtured a unique voice with an emphasis on the ensemble’s commentaries on broad social and cultural dialogue. In addition to its professional practice, Back to Back collaborates intensively with communities around the world, with a focus on elevated social inclusion for people with disabilities. Back to Back has received 23 national and international awards, including the 2024 Venice Biennale Golden Lion for Lifetime Achievement in Theatre and 2022 International Ibsen Award.

SCOTT BENESIINAABANDAN is Anishinaabe, a member of Obishikokaang/Lac Seul First Nations. Scott is an intermedia artist who works in experimental image-making, light installations, and sonic materials. Scott has an MFA in photography from Concordia University and resides in Winnipeg, Manitoba. Scott’s research interest is the intersections of artificial intelligence(s) and Anishinaabemowin. Scott has completed residencies at Parramatta Artist Studios, Australia; Context Gallery in Derry, North of Ireland; and the University of Lethbridge/Royal Institute of Technology (iAIR residency), as well as new media residencies with the Initiative for Indigenous Futures and AbTeC, Montreal, and currently collaborates with Abundant Intelligences at Concordia University.

NAO BUSTAMANTE is a legendary artist from the Central Valley of California, now based in Los Angeles. Her work spans various genres and has been featured at notable venues such as the Institute of Contemporary Arts in London, the Museum of Modern Art in New York, and the Sundance International Film Festival. Bustamante’s accolades include the Anonymous Was a Woman Fellowship, the New York Foundation for the Arts Fellowship, and the Chase Legacy Award in Film. She has been an artist in residence at institutions including Skowhegan, UC Riverside, and Artpace San Antonio. Her 360-degree mini-series, The Wooden People, received a grant from the Mike Kelley Foundation and was previewed at REDCAT in 2021. She currently serves as a Professor of Art at the USC Roski School of Art and Design.

MANTHIA DIAWARA is a writer and filmmaker. He is a Distinguished Professor of Comparative Literature and Film at New York University. Diawara’s notable films include A Letter from Yene (2022), An Opera of the World (2017), Négritude: A Dialogue between Wole Soyinka and Léopold Senghor (2016), and Édouard Glissant: One World in Relation (2010). His films have been presented at festivals, biennials, and a wide range of exhibition venues, including the 34th Bienal de São Paulo; the Centre Pompidou, Paris; documenta 14, Kassel; Fundação de Serralves, Porto; and Haus der Kulturen der Welt, Berlin.

STEPHANIE DINKINS is a transdisciplinary artist and educator whose work sits at the intersection of emerging technologies and our future histories. Through her art practice, she is committed to creating platforms for dialogue about AI as it intersects with critical societal issues. An LG Guggenheim Awardee and one of the TIME100 Most Influential People in AI, Dinkins leverages technology and storytelling to challenge and reimagine the narratives surrounding marginalized communities, particularly those of Black and brown individuals. Through her installations, digital platforms, and community-based projects, Dinkins emphasizes the importance of incorporating diverse voices and perspectives into the design and application of AI, advocating for a future where technology uplifts and amplifies underrepresented histories and experiences and fosters a tech ecosystem that benefits all. She exhibits internationally and publicly advocates for inclusive AI at a range of international community, private, and institutional venues.

ANNIE DORSEN is a theater director working at the intersection of algorithmic art and live performance. Her project Prometheus Firebringer, co-produced by New York Live Arts and MAX Media Art Xploration, premiered at Bryn Mawr College in January 2023 and then at the Chocolate Factory in May. The piece moved Off-Broadway to Theatre for a New Audience in fall 2023 and to REDCAT in June 2025. Other works include Infinite Sun, commissioned by Sharjah Biennial 14 (2019); The Great Outdoors (2017); Yesterday Tomorrow (2015); and A Piece of Work (2013). A retrospective of Dorsen’s work was presented at Bryn Mawr College in 2022, with major support from the Pew Center for Arts and Heritage. The publication Algorithmic Theater: Essays and Dialogues, 2012–2022 contains a decade of writings by and about Dorsen, including dialogues with collaborators. She has taught at the University of Chicago and Bard College. Dorsen is the recipient of a MacArthur Fellowship, a Spalding Gray Award, a Guggenheim Fellowship, a Foundation for Contemporary Arts Grant to Artists Award, and the Herb Alpert Award in the Arts. She recently graduated from NYU School of Law, with a focus on tech policy and civil rights.

DAHLIA ELSAYED AND ANDREW DEMIRJIAN use contemporary and historical research to create tactile objects and visual experiences that pull from the past to anticipate alternative futures. They are known for bright, thought-provoking immersive mixed-media installations influenced by their Southwest Asian and North African backgrounds. Their collaborative work has been exhibited widely in the United States and internationally, including at Locust Projects, Miami; the Ford Foundation Gallery, New York; the Arab American National Museum, Dearborn, MI; Transformer Gallery, Washington, DC; and Laznia Center for Contemporary Art, Gdansk, Poland. Their work has been supported and recognized by the New Jersey State Council on the Arts, the Visual Studies Workshop, CEC ArtsLink, and the Grand Canyon National Park Residency. Elsayed is a Professor of Humanities & Art at CUNY LaGuardia Community College, Long Island City. Demirjian is Associate Professor of Film & Media Studies at CUNY Hunter College, New York.

RAFA ESPARZA (born 1981, Los Angeles) lives and works in Los Angeles. esparza received a BA from the University of California, Los Angeles in 2011. Solo exhibitions have been held at Artists Space, New York; Commonwealth and Council, Los Angeles; MASS MoCA, North Adams, MA; Artpace San Antonio; and Ballroom Marfa, Marfa, TX. Selected group exhibitions have been held at the Museum of Contemporary Art Denver; the San Francisco Museum of Modern Art; the Museum of Contemporary Art Tucson; the Whitney Museum of American Art, New York; and the Hammer Museum, Los Angeles. esparza is a recipient of a Pérez Prize; a Latinx Artist Fellowship, Mellon Foundation; a Lucas Artists Fellowship; a Louis Comfort Tiffany Foundation Award; and an Art Matters Foundation Grant. esparza has participated in residencies at Artpace San Antonio and as the Wanlass Artist in Residence, OXY ARTS, Los Angeles.

MASHINKA FIRUNTS HAKOPIAN is an Armenian writer, artist, and researcher born in Yerevan and residing in Glendale, CA. She is an Associate Professor in Technology and Social Justice at ArtCenter College of Design. She is the author of The Institute for Other Intelligences (X Artists’ Books, 2022). With Meldia Yesayan, she co-curated What Models Make Worlds: Critical Imaginaries of AI at the Ford Foundation Gallery in 2023, and Encoding Futures at OXY ARTS in 2021. Her work has been presented at the Centre Pompidou, the New Museum, 2220 Arts + Archives, and the ICA Philadelphia, among others. She is a contributing editor at ART PAPERS, where she guest edited the Spring 2023 special issue on AI with Sarah Higgins. She is a 2024 Eyebeam Democracy Machine Fellow.

TALIA HEIMAN is Assistant Curator at the Roy and Edna Disney CalArts Theater (REDCAT). From 2020 to 2023, Heiman served on the curatorial team for Is it morning for you yet?, the 58th Carnegie International. She has held curatorial and programming positions at the ICA Philadelphia; Artis; CCA Tel Aviv-Yafo; the Museum of Modern Art, New York; and Artists Space. She holds an MA from the Center for Curatorial Studies, Bard College, and a BA from New York University, where she studied art history and gender, sex, and sexuality studies.

INTERSPECIFICS is an international independent artistic research studio established by Leslie García and Paloma López in Mexico City in 2013. It has since grown to include members Emmanuel Anguiano, Alfredo Lozano, Felipe Rebolledo, and Doreen Rios. Their research has focused on using sound and artificial intelligence to investigate the emergence of patterns—ranging from biosignals to the morphology of various living organisms—as a potential form of nonhuman communication. With this in mind, they curated a collection of experimental research and educational tools called Ontological Machines. Their work is deeply influenced by the Latin American context, where conditions of precarity foster creative endeavors and traditional technologies intersect with cutting-edge production methods. Their current lines of research are shifting toward exploring the “hard problem of consciousness” and the close relationship between mind and matter, where magic appears to be fundamental. Sound remains their interface to the universe.

KITE (Dr. Suzanne Kite) is an Oglála Lakȟóta performance artist, visual artist, and composer raised in Southern California. Kite has a BFA in music composition from CalArts, an MFA from Bard College’s Milton Avery Graduate School, and a PhD in Fine Arts from Concordia University. Kite’s scholarship and practice investigate contemporary Lakota ontologies through research, computational media, and performance, often in collaboration with family and community members. Recently, Kite has been developing body interfaces for machine learning-driven performance, sculptures generated by dreams, and experimental sound and video work. Kite’s award-winning article “Making Kin with the Machines” (co-authored with Jason Edward Lewis, Noelani Arista, and Archer Pechawis) was published in the Journal of Design and Science (MIT Press). Kite is a 2023 Creative Capital Award winner, a 2023 USA Fellow, and a 2022 Creative Time Open Call artist with Alisha B. Wormsley. Kite is Distinguished Artist in Residence and Assistant Professor of American and Indigenous Studies at Bard College, and a Research Associate and Residency Coordinator for the Abundant Intelligences (Indigenous AI) project.

JASON EDWARD LEWIS is a digital media theorist, poet, and software designer. He founded Obx Laboratory for Experimental Media, where he conducts research/creation projects exploring computation as a creative and cultural material. Lewis is committed to developing new forms of expression by working on conceptual, critical, creative, and technical levels simultaneously. He is University Research Chair in Computational Media and the Indigenous Future Imaginary as well as Professor of Computation Arts at Concordia University. Lewis co-directs Abundant Intelligences, the Indigenous Futures Research Centre, the Aboriginal Territories in Cyberspace research network, and the Skins Workshops on Aboriginal Storytelling and Video Game Design. His work has been shown in solo shows, group exhibitions, and festivals on four continents, and has received numerous international awards. Lewis is lead co-author of the award-winning essay “Making Kin with the Machines” and editor of the groundbreaking Indigenous Protocol and Artificial Intelligence Position Paper.

DANIELA LIEJA QUINTANAR is Chief Curator and Deputy Director of Programs at the Roy and Edna Disney CalArts Theater (REDCAT). From 2016 to 2022, Lieja Quintanar served as Chief Curator and Director of Programming at Los Angeles Contemporary Exhibitions (LACE). Her curatorial practice takes inspiration from collective life, spaces of political struggle, and communal forms of knowledge production. For the 2024 Getty initiative PST ART: Art & Science Collide, she curated the exhibition Beatriz da Costa: (un)disciplinary tactics for LACE and All Watched Over by Machines of Loving Grace for REDCAT. She was awarded an Andy Warhol Foundation Curatorial Research Fellowship. She is part of the Los Angeles Tenants Union, collaborating with the East Side Local/Unión de Vecinos (2015–ongoing).

MUXX is a collective of four independent cells that intersect into a whole, comprising Lukas Avendaño (performance artist, choreographer, and muxe); EYIBRA (formerly known as Abraham Brody; composer, performer, and multimedia artist); Óldo Erréve (digital artist); and NNUX (composer, producer, and sound artist). MUXX has presented its performances at LACMA and currently has a solo exhibition at Laboratorio Arte Alameda, Mexico City.

CHARMAINE POH is an artist from Singapore working through media, moving image, and performance to peel apart, interrogate, and hold ideas of agency, repair, and the body across worlds. She aligns herself with strategies of visibility, opacity, deviance, and futurity. Her practice initially focused on using experimental documentary photography to explore how femininity and queerness are articulated. In 2021, she embarked on THE YOUNG BODY UNIVERSE, a series of techno-feminist enactments drawing from her experience as a child actor in the 2000s. Her work centers the effects of vulnerability, desire, and intimacy from the vantage point of subversion. She has exhibited at the Singapore Art Museum, the Seoul Museum of Art, the Leslie-Lohman Museum, and the 60th Venice Biennale’s Foreigners Everywhere, among others. In 2019, she was one of Forbes Asia’s 30 Under 30 in the arts. Based between Berlin and Singapore, she is a co-founder of the magazine Jom and a member of the collective Asian Feminist Studio for Art and Research (AFSAR).

CHICHARRÓN, aka TIM REYES, is a musician and performance artist, as well as a classically trained percussionist. His work delves into Mexican American identity, masculinity, and survival, drawing from his upbringing in East LA during the 1990s and early 2000s. His art examines the tensions between Catholicism, born-again Christianity, and sex positivity, focusing on hyper-masculinity, submission, and sex work for both survival and pleasure. A recipient of the Creatives Rebuild New York grant, Chicharrón examines indulgence in sex without shame on his latest record, God Shaped Hole. He recently presented a multimedia exhibition titled Self Esteem Room at All Street, NYC.

JOÃO RIBAS is Steven D. Lavine Executive Director and Vice President for Cultural Partnerships at the Roy and Edna Disney CalArts Theater (REDCAT). He was previously director of the Serralves Museum of Contemporary Art, Porto, where he also held the positions of deputy director and senior curator (2014–18); curator of the MIT List Visual Arts Center (2009–13); and curator at The Drawing Center, New York (2007–09). He is the winner of four Association Internationale des Critiques d’Art (AICA) awards (2008–11) and of the Emily Hall Tremaine Award (2010). His writing has appeared in numerous publications, monographs, and magazines, and he has taught at Yale University, the Rhode Island School of Design, and the School of Visual Arts, New York, among other institutions. He was the curator of the 4th Ural Industrial Biennial in 2017, and of the Portuguese Pavilion at the 58th Venice Biennale in 2019.

SARAH ROSALENA (Wixárika) is an interdisciplinary artist working between traditional craft techniques and emerging technology. She is Assistant Professor of Art in Computational Craft at UC Santa Barbara. She was recently awarded the Artadia Award; the Creative Capital Award; the LACMA Art + Tech Lab Grant; the Steve Wilson Award from Leonardo, the International Society for the Arts, Sciences, and Technology; and the Carolyn Glasoe Bailey Art Prize. She has had solo exhibitions with the Columbus Museum of Art, the LACMA Art + Tech Lab, the Museum of Contemporary Art Santa Barbara, Clockshop, and Blum & Poe Gallery. Her work is in the permanent collections of LACMA, the Columbus Museum of Art, and the Raclin Murphy Museum of Art.

KIRA XONORIKA is an artist, writer, researcher, and futurist whose work explores the connections between technoscience, sovereignty, worldbuilding, and magic. Her writing has been published in e-flux and Momus, among others. Recent exhibitions include What Models Make Worlds: Critical Imaginaries of AI at the Ford Foundation Gallery and Small V01ce at Honor Fraser. In 2024, she will spearhead Future Memory Lab, the first GenAI residency in South America.

Credits and Acknowledgments

All Watched Over by Machines of Loving Grace

PROJECT DIRECTOR João Ribas

EXHIBITION CURATOR Daniela Lieja Quintanar

ASSISTANT CURATOR Talia Heiman

PERFORMANCE PROGRAM CURATORS Katy Dammers, Daniela Lieja Quintanar, and Edgar Miramontes with Talia Heiman

TECHNICAL DIRECTOR Adam Matthew-McMillen

DEPUTY DIRECTOR, FINANCE AND OPERATIONS Allison Keating

ASSOCIATE TECHNICAL DIRECTORS Chu-Hsuan Chang and Lucio Maramba

FACILITIES AND PRODUCTION MANAGER Jacques Boudreau

BOX OFFICE AND VISITOR SERVICES MANAGER Brent Charles

FRONT OF HOUSE MANAGER Naomi Oppenheim

ADMINISTRATIVE MANAGER Rolando Rodriguez

EXHIBITION DESIGNER Adalberto Charvel

GRAPHIC DESIGNER Ella Gold

GETTY MARROW CURATORIAL INTERN Corey Solorio LoDuca

The Shadow Whose Prey the Hunter Becomes Back to Back Theatre

AUTHORS Mark Deans, Michael Chan, Bruce Gladwin, Simon Laherty, Sarah Mainwaring, Scott Price, Sonia Teuben

DIRECTOR Bruce Gladwin

PERFORMERS Simon Laherty, Sarah Mainwaring, Scott Price

COMPOSITION Luke Howard Trio—Daniel Farrugia, Luke Howard, Jonathon Zion

SOUND DESIGN Lachlan Carrick

LIGHTING DESIGN Andrew Livingston, bluebottle

SCREEN DESIGN Rhian Hinkley, lowercase

COSTUME DESIGN Shio Otani

AI VOICE OVER Belinda McClory

SCRIPT CONSULTANT Melissa Reeves

CREATIVE DEVELOPMENT Mark Cuthbertson, Rhian Hinkley, Pippin Latham, Andrew

Livingston, Victoria Marshall, Brian Tilley

STAGE MANAGER Alana Hoggart

SOUND ENGINEER Peter Monks

COMPANY MANAGER Erin Watson

PRODUCTION MANAGER Bao Ngouansavanh

HEAD OF ARTISTIC PLANNING Tanya Bennett

EXECUTIVE PRODUCER Tim Stitz

The Body is the Interface: Kite and Interspecifics

LAND ACKNOWLEDGEMENT Virginia Carmelo, Tongva Elder and Knowledge Keeper

WIČHÁȞPI WÓIHAŊBLEYA (DREAMLIKE STAR) Kite

TECHNOLOGIST Sean Hellfritsch

CHOREOGRAPHER Olivia Camfield

MANAGER Wíhaŋble S’a (Dreamer) Center for Indigenous AI: Emily Shaw

COORDINATOR Wíhaŋble S’a (Dreamer) Center for Indigenous AI: Rebecca Cosenza

META SINCRONÍA 1.0 Interspecifics (Founders: Leslie García and Paloma López; Members: Emmanuel Anguiano, Alfredo Lozano, Felipe Rebolledo)

PARTICIPANTS Adriana Widdoes and baby Rio Strain, Audrey Medrano, Jessica Fuquay, Kio Gutierrez, Carmen Argote

Live Night: Cruising Bodies, Spirits, and Machines

CURATORS Edgar Miramontes, CAP UCLA Executive and Artistic Director and Daniela Lieja Quintanar, REDCAT Chief Curator and Deputy Director of Programs

ASSISTANT CURATOR Talia Heiman, REDCAT

PROJECT DIRECTOR João Ribas, REDCAT

DEPUTY DIRECTOR AND PROGRAM MANAGER Fred Frumberg, CAP UCLA

DIRECTOR OF EDUCATION AND SPECIAL INITIATIVES Meryl Friedman, CAP UCLA

PRODUCTION MANAGER Bozkurt “Bozzy” Karasu, CAP UCLA

ARTIST LIAISON MANAGER Zarina Rico, CAP UCLA

ASSOCIATE TECHNICAL DIRECTOR/AUDIO & VIDEO Duncan Woodbury, CAP UCLA

DIRECTOR/LIGHTING & STAGE Katelan Braymer, CAP UCLA

COMMUNITY PROJECTS AND OPERATIONS MANAGER Mads Falcone, CAP UCLA

ASSISTANT TO THE EXECUTIVE AND ARTISTIC DIRECTOR Emily Davis, CAP UCLA

Prometheus Firebringer

WRITER/DIRECTOR/PERFORMER Annie Dorsen

SOUND DESIGN Ian Douglas-Moore

VIDEO AND SYSTEMS DESIGN Ryan Holsopple

LIGHTING DESIGN/TECHNICAL DIRECTION Ruth Waldeyer

SOFTWARE DESIGN AND PROGRAMMING Sukanya Aneja

VOICE PRINTS Okwui Okpokwasili, Livia Reiner

3D ARTIST Harry Kleeman

DRAMATURGY Tom Sellar

PRODUCER Natasha Katerinopoulos

We want to thank the art community dedicated to the study and exploration of new technologies, especially artist Nancy Baker Cahill and curator Jesse Damiani for their support during our curatorial research. Thank you to Kamal Sinclair, Curator of the Digital Innovation Initiative at the Music Center, for the commission of ԲԱԺԱԿ ՆԱՅՈՂ (One Who Looks at the Cup) (2024) by Mashinka Firunts Hakopian. Thank you to DOT DOT Studio for the support of Stephanie Dinkins’s and Mashinka Firunts Hakopian’s projects. We are so grateful for the collaboration with Dr. Pete Chandrangsu and the students of his Fall 2024 course “Microbes x Art” at Scripps College.

Thank you to lead preparator Chris Wormald and preparators Calvin Lee, Kyle Frankowski, and Ian Gaetz. We are grateful for the collaboration with Laboratorio de Arte Alameda and the support of Lucia Sanromán for MUXX’s performance. And finally, thank you to each member of the team of the Getty Foundation.

Special thanks to The Andy Warhol Foundation for their partial support of the exhibition All Watched Over by Machines of Loving Grace.

CALIFORNIA INSTITUTE OF THE ARTS (CALARTS) has set the pace for educating professional artists since 1970. Offering rigorous undergraduate and graduate degree programs through six schools—Art, Critical Studies, Dance, Film/Video, Music, and Theater—CalArts champions creative excellence, critical reflection, and the development of new forms and expressions. As successive generations of faculty and alumni have helped shape the landscape of contemporary arts, the Institute first envisioned by Walt Disney encompasses a vibrant, eclectic community with global reach, inviting experimentation, independent inquiry, and active collaboration and exchange among artists, artistic disciplines, and cultural traditions.

THE ROY AND EDNA DISNEY CALARTS THEATER (REDCAT) is a multidisciplinary center for innovative visual and performing art founded by the California Institute of the Arts (CalArts) in the Walt Disney Concert Hall complex in downtown Los Angeles. Through performances, exhibitions, screenings, and literary events, REDCAT introduces diverse audiences, students, and artists to the most influential developments in the arts from around the world, and gives artists in this region the creative support they need to achieve national and international stature. REDCAT continues the tradition of CalArts by encouraging experimentation, discovery, and lively civic discourse.

EAST OF BORNEO (EOB) is CalArts’ online magazine of contemporary art and its history as considered from Los Angeles. We publish new essays and interviews alongside a multimedia archive curated in collaboration with our readers. East of Borneo Books sees the extension of our mission into print, calling attention to a diverse range of critical writing on the visual culture of Los Angeles. We also present exhibitions, artist talks, screenings, and workshops, and we republish material that is out of print or hard to find through our Second Life series. East of Borneo is supported by the Art School at CalArts.

All Watched Over by Machines of Loving Grace 2025

ISBN 978-0-9971997-4-1

This publication accompanies All Watched Over by Machines of Loving Grace, an exhibition, screening, and performance program initiated as part of PST ART: Art & Science Collide and presented at the Roy and Edna Disney CalArts Theater (REDCAT), September 12, 2024–February 23, 2025.

Published by California Institute of the Arts

EDITED BY Daniela Lieja Quintanar with Talia Heiman

ASSOCIATE EDITOR Adriana Widdoes

EDITORIAL ASSISTANCE Corey Solorio LoDuca

COPYEDITOR Poppy Coles

DESIGN Ella Gold

PRINTED BY Nocaut LLC, Edition of 500

© 2025 by California Institute of the Arts

All rights reserved. No part of this book may be reproduced without permission in writing by the publisher. All images are copyright of the artist and used with permission. Copyright for all texts is held by the authors and used with permission.

All Watched Over by Machines of Loving Grace addresses one of the most pressing issues of our time—the impact of artificial intelligence—by proposing alternative directions for its future. This publication accompanies the exhibition and performances presented at the Roy and Edna Disney CalArts Theater (REDCAT) as part of Getty’s PST ART: Art & Science Collide, combining contributions by the participating artists with invited contributors to inform the next generation of AI.

Edited by

Daniela Lieja Quintanar with Talia Heiman

Texts by

Nora Al-Badri

Minne Atairu

Stephanie Dinkins

Mashinka Firunts Hakopian

Interspecifics

Kite, Scott Benesiinaabandan and Jason Edward Lewis

Charmaine Poh

Sarah Rosalena

João Ribas

Kira Xonorika

Back to Back Theatre

Annie Dorsen
