The Project

Inspired by the canonical “Software Studies: A Lexicon”, this ongoing project aims to create a ‘Lexicon of Generative AI’ to capture and share experiences of Generative AI as it is understood, applied, critiqued, and incorporated into creative practices.

Generative AI has brought new terms such as ‘Prompt Engineering’, ‘Latent Space’ and ‘Generative Adversarial Networks’. Often rooted in the vocabularies of Computer Science and Software Engineering, in use these terms sit between technical and creative languages: as esoteric as they are explanatory, part marketing jargon and part magical language.

A comprehensive set of definitions would be impossible and far less interesting and useful than understanding the myriad ways in which the emerging terms, concepts and processes of Generative AI are changing the creative landscape. This lexicon is particularly interested in the role of creative practice in critiquing as well as normalising emerging Generative AI technologies.

The speed with which phrases such as ‘text-to-image’ and ‘Neural Network’ have entered everyday use hints at an accelerated normalisation. We hope that this lexicon will provide a space to examine some of these concepts more closely, and to question some of the assumptions that underpin them.

Rather than ‘definitions’, this lexicon aims to offer understandings of, and perspectives on, the key concepts of Generative AI practices by gathering reflections, critical responses and contextualisation through practice. Contributions should aim to be concise, providing an essential understanding of the chosen subject, while also presenting a particular perspective or position in relation to creative practice and the cultural landscape.

Those interested should submit texts of between 800 and 2,000 words, optionally including up to 5 images (.jpg, .png, .svg, or .tiff format) with the smallest dimension at least 1,000 pixels. Possible topics include:

Autonomous Art Systems, Artistic Intent, Randomness, Large Language Models, Text-to-Image Synthesis, Image-to-Text Captioning, Text-to-Speech Synthesis, Multi-Modal Generative Models, Zero-shot Learning, Latent Space, Transformer Architectures, Attention Mechanisms, Autoencoders, Generative Adversarial Networks (GANs), Neural Networks, Style Transfer, Diffusion, and Deep Belief Networks.