After generating tens of thousands of images through AI, I began to question whether the residual material from these processes could serve as the basis for a new artwork. To explore this idea, I collaborated with an AI to write a program that randomly selects filenames from an archive of these previously generated works. Each filename encapsulates the settings used in the creation process, including the random seed, a key component of generative AI systems such as Stable Diffusion.
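The selection step can be summarized in a few lines of Python. This is a minimal sketch of the idea, not the program written with ChatGPT; the archive location and file extension are assumptions.

```python
# Sketch: pick one filename at random from the archive of generated images.
# The directory name and the .png extension are illustrative assumptions.
import random
from pathlib import Path

ARCHIVE_DIR = Path("archive")  # hypothetical folder of previously generated works

def pick_random_filename() -> str:
    """Return one randomly chosen filename (which encodes prompt, seed, etc.)."""
    files = [p.name for p in ARCHIVE_DIR.glob("*.png")]
    return random.choice(files)
```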
These filenames, which offer a glimpse into the mechanics of image generation, are projected onto a screen in real time. Simultaneously, they are transformed into speech by a generative text-to-speech system, Coqui TTS. No further processing is applied to the audio, preserving the raw output of the AI, including Coqui TTS's stuttering when voicing numbers, which adds a distinct auditory texture to the experience.
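The voicing step could look roughly like the following, using the Coqui TTS Python API. The model name and the output file are examples, not details of the actual piece; the point is that the filename is passed to the synthesizer verbatim, with no post-processing.

```python
# Sketch: speak a filename verbatim with Coqui TTS, keeping the raw output.
# The model choice here is an example; the work's actual voice is not specified.
from TTS.api import TTS

tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC")

def speak(filename: str) -> None:
    """Synthesize the unedited filename as speech; numbers may come out haltingly."""
    tts.tts_to_file(text=filename, file_path="spoken_filename.wav")
```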
This layering of projection and voice gives new life to the residual AI material, making it both seen and heard. By repurposing this discarded data, the work questions not only the value of AI-generated content but also what qualifies as residual material and its potential meaning beyond its original context.
The program itself was developed in Python in collaboration with ChatGPT; the original image-based works were generated with Stable Diffusion. A 10-minute excerpt, captured as a screen recording, showcases the dynamic interplay between visual and auditory elements in this new context.