{"id":218,"date":"2024-11-04T21:14:40","date_gmt":"2024-11-04T21:14:40","guid":{"rendered":"https:\/\/lexiconia.art\/?page_id=218"},"modified":"2025-09-23T05:46:32","modified_gmt":"2025-09-23T05:46:32","slug":"spaces-liminal-latent","status":"publish","type":"page","link":"https:\/\/lexiconia.art\/?page_id=218","title":{"rendered":"Spaces Liminal &amp; Latent"},"content":{"rendered":"\n<div class=\"wp-block-stackable-text stk-block-text stk-block stk-7b15f6d\" data-block-id=\"7b15f6d\"><p class=\"stk-block-text__text\">Chad Eby<\/p><\/div>\n\n\n\n<p class=\"has-text-align-right has-palette-color-2-color has-text-color has-link-color wp-elements-6c69a6496420e94efcf8eb229dbb4f44\"><em>Latent Space<\/em><\/p>\n\n\n\n<div class=\"wp-block-stackable-image stk-block-image stk-block stk-6c0cc81 is-style-default\" data-block-id=\"6c0cc81\"><style>.stk-6c0cc81 .stk-img-figcaption{font-style:italic !important;letter-spacing:-0.2px !important;}.stk-6c0cc81 .stk-img-wrapper img{object-position:47% 18% !important;object-fit:contain !important;}:where(.stk-hover-parent:hover,  .stk-hover-parent.stk--is-hovered) .stk-6c0cc81 .stk-img-wrapper::after{background-color:#000000B3 !important;}<\/style><figure><span class=\"stk-img-wrapper stk-image--shape-stretch\"><img loading=\"lazy\" decoding=\"async\" class=\"stk-img wp-image-258\" src=\"https:\/\/lexiconia.art\/wp-content\/uploads\/2024\/12\/8k_realistic_medium_format_photo_backrooms_liminal_spaces_shallow_depth_of_field-23c18199-4dc1-4ae2-b5e3-7a16fb8fa0d6.webp\" width=\"896\" height=\"512\" alt=\"Stable Diffusion FLUX image of an empty hospital corridor; a liminal space from latent space\" srcset=\"https:\/\/lexiconia.art\/wp-content\/uploads\/2024\/12\/8k_realistic_medium_format_photo_backrooms_liminal_spaces_shallow_depth_of_field-23c18199-4dc1-4ae2-b5e3-7a16fb8fa0d6.webp 896w, 
https:\/\/lexiconia.art\/wp-content\/uploads\/2024\/12\/8k_realistic_medium_format_photo_backrooms_liminal_spaces_shallow_depth_of_field-23c18199-4dc1-4ae2-b5e3-7a16fb8fa0d6-300x171.webp 300w, https:\/\/lexiconia.art\/wp-content\/uploads\/2024\/12\/8k_realistic_medium_format_photo_backrooms_liminal_spaces_shallow_depth_of_field-23c18199-4dc1-4ae2-b5e3-7a16fb8fa0d6-768x439.webp 768w\" sizes=\"auto, (max-width: 896px) 100vw, 896px\" \/><\/span><figcaption class=\"stk-img-figcaption\">Stable Diffusion FLUX image of an empty hospital corridor; a liminal space from latent space<\/figcaption><\/figure><\/div>\n\n\n\n<div class=\"wp-block-stackable-heading stk-block-heading stk-block-heading--v2 stk-block stk-43dc8bd\" id=\"strong-latent-space-strong\" data-block-id=\"43dc8bd\"><style>.stk-43dc8bd {margin-bottom:8px !important;}<\/style><h4 class=\"stk-block-heading__text\"><strong>Latent Space<\/strong><\/h4><\/div>\n\n\n\n<p><em>Latent space,<\/em> as understood in generative artificial intelligence work, refers to a high-dimensional mathematical construct that serves as a space of transformation between inputs of human intent and unruly\u2014sometimes uncanny\u2014generative output. Because of the particular way that latent spaces have developed as a space- and computation-saving abstraction, they are largely opaque from the outside; each one a black box but brimming inside with a riotous multitude of unrealized potentialities.<\/p>\n\n\n\n<p>Unlike the step-by-step logic of traditional imperative computer programs, generative AI models that employ latent space remain largely inscrutable, existing as complex, mathematical abstractions that defy un-augmented human comprehension. 
The imaginary of latent space stands in as an undiscovered country; a largely impenetrable territory known only from the fragmentary texts, images, and sounds smuggled out through computationally intensive decoding or from arduous and incomplete nascent attempts at analysis and mapping.<\/p>\n\n\n\n<p>Specifically for image generation, a latent space will contain data abstracted from digitized images paired with matching text descriptions. Enormous beyond human imagination, each latent space is like a secret garden overgrown with vector encodings gathered from the probabilistic patterns of literal billions of text-image pairs. The latent space is where an encoded prompt may be transformed into a particular image out of the myriad possible ones<sup data-fn=\"6a84ec9b-32a6-4f8c-b2fe-5d76d6fd6142\" class=\"fn\"><a href=\"#6a84ec9b-32a6-4f8c-b2fe-5d76d6fd6142\" id=\"6a84ec9b-32a6-4f8c-b2fe-5d76d6fd6142-link\">1<\/a><\/sup>.<\/p>\n\n\n\n<p>The architecture of latent spaces varies across different types of generative techniques. Variational Autoencoders (VAEs) explicitly construct their latent space as a probabilistic distribution, typically a multi-dimensional Gaussian, allowing them to generate novel outputs by sampling from different regions. In contrast, Generative Adversarial Networks (GANs) implicitly learn a latent space through an antagonistic interplay between generator and discriminator networks. While often less structured than VAE latent spaces, the dance of GANs can nonetheless produce high-fidelity output. Diffusion models also employ autoencoders, but add a probabilistic denoising method, effectively traversing a path through latent space from pure noise to a coherent image. 
This more structured iterative process allows diffusion models to generate believable images while maintaining some degree of interpretability of their latent representations, offering the possibility to steer image generation modestly toward preferred outcomes.<sup data-fn=\"79c1c951-64c5-422c-96db-e5179881f9ea\" class=\"fn\"><a href=\"#79c1c951-64c5-422c-96db-e5179881f9ea\" id=\"79c1c951-64c5-422c-96db-e5179881f9ea-link\">2<\/a><\/sup><\/p>\n\n\n\n<p>Efforts to visualize and interpret latent spaces have become a significant area of research in the field of AI interpretability. Techniques like t-SNE and UMAP allow for the projection of high-dimensional latent spaces into two or three dimensions, providing a look into the organization of learned features<sup data-fn=\"766997ab-789e-4dab-be02-131f7a197d71\" class=\"fn\"><a href=\"#766997ab-789e-4dab-be02-131f7a197d71\" id=\"766997ab-789e-4dab-be02-131f7a197d71-link\">3<\/a><\/sup>. These visualizations are necessarily imperfect, offering only small glimpses into the complex relationships encoded within a latent space. Latent space interpolation has become a powerful tool for exploring the generative capabilities of these models. By smoothly transitioning between different vector coordinates in latent space, we can observe how a model&#8217;s encoding of features and concepts evolves. This technique has been used to create mesmerizing visual effects, such as morphing between different faces or objects, and, increasingly, to produce coherent video clips.<\/p>\n\n\n\n<div class=\"wp-block-stackable-heading stk-block-heading stk-block-heading--v2 stk-block stk-5ba929d\" id=\"strong-liminal-space-strong\" data-block-id=\"5ba929d\"><style>.stk-5ba929d {margin-bottom:8px !important;}<\/style><h4 class=\"stk-block-heading__text\"><strong>Liminal Space<\/strong><\/h4><\/div>\n\n\n\n<p><em>Liminal space,<\/em> like latent space, is similarly understood as a space of transition, ambiguity, and transformation. 
The adjective &#8220;liminal&#8221; is derived from the Latin word <em>limen<\/em>, which referred to literal doorway thresholds. In the nineteenth century, the word was used by psychologists to indicate a lower limit\u2014a metaphorical threshold beyond which a sensation is too faint to be perceived.<sup data-fn=\"99a229ce-045f-41fb-8c65-c4f8f0d6808f\" class=\"fn\"><a href=\"#99a229ce-045f-41fb-8c65-c4f8f0d6808f\" id=\"99a229ce-045f-41fb-8c65-c4f8f0d6808f-link\">4<\/a><\/sup> Only later did the word come to mean a general transitional or intermediate state, and by coupling \u201climinal\u201d to \u201cspace,\u201d current usage restores a measure of the original term\u2019s architectural connections.<\/p>\n\n\n\n<p>In liminal spaces we encounter the unsettling in-betweenness of not being quite where we were, but also not quite having arrived at where we intended to be. It is in this sense that liminal spaces are associated with the middle stage of a rite of passage; they encompass a disorienting zone of uncertainty.<sup data-fn=\"5ef85ec6-06ba-40cf-9b1c-6cfdbc4202a5\" class=\"fn\"><a href=\"#5ef85ec6-06ba-40cf-9b1c-6cfdbc4202a5\" id=\"5ef85ec6-06ba-40cf-9b1c-6cfdbc4202a5-link\">5<\/a><\/sup><\/p>\n\n\n\n<p>Latent and liminal spaces conceptually overlap as being zonic or spatial in structure, but more importantly they share a similar functional role as a place of ambiguity, potentiality, transition, translation, and transformation. In addition to the structural and functional similarities, latent space may be argued to share an additional characteristic of liminal space: it is also haunted. 
The spectral turn toward &#8220;hauntology&#8221; in the Derridean sense acts as a vehicle to further explore the intersections of liminal and latent space and may offer new ways to think about interacting with generative AI.<sup data-fn=\"1c7d2db4-f699-483d-8ce9-e818860e69f7\" class=\"fn\"><a href=\"#1c7d2db4-f699-483d-8ce9-e818860e69f7\" id=\"1c7d2db4-f699-483d-8ce9-e818860e69f7-link\">6<\/a><\/sup><\/p>\n\n\n\n<div class=\"wp-block-stackable-heading stk-block-heading stk-block-heading--v2 stk-block stk-a1c2d72\" id=\"strong-the-spectral-turn-strong\" data-block-id=\"a1c2d72\"><style>.stk-a1c2d72 {margin-bottom:8px !important;}<\/style><h4 class=\"stk-block-heading__text\"><strong>The Spectral Turn<\/strong><\/h4><\/div>\n\n\n\n<p>The generative capacities of modern AI systems arise from their ability to navigate the liminal, in-between corridors of latent space: the high-dimensional vector fields where the essential features and patterns of the training data are distilled, organized, and interrelated in complex, convoluted ways. In these not-quite-spaces, generative models begin to channel the spectral influences of their pasts, transmuting them into startlingly original and sometimes deeply uncanny new forms.<\/p>\n\n\n\n<p>Latent space models, called \u201ccheckpoints\u201d in Stable Diffusion, are like vast graveyards filled with the encoded bodies of images paired with their textual epitaphs. It is in this sense that the latent space becomes a site of digital necromancy, where the ghosts of the past are summoned to birth strange new forms. Projects like Deep Dream, which leverage the latent representations of convolutional neural networks to generate hallucinatory imagery of slug-dogs and orientalist pagodas, are emblematic of this early generative image aesthetic. 
The uncanny qualities of these outputs speak to a deeper unease with the machine&#8217;s ability to channel the spectral influences of its training data, conjuring visions that exist in a liminal zone between the familiar and the alien.<sup data-fn=\"4f1d3f2a-fb97-4ca7-9f4b-a4b96c9b4390\" class=\"fn\"><a href=\"#4f1d3f2a-fb97-4ca7-9f4b-a4b96c9b4390\" id=\"4f1d3f2a-fb97-4ca7-9f4b-a4b96c9b4390-link\">7<\/a><\/sup><\/p>\n\n\n\n<p>The hauntological dimension of latent space speaks to an ontological unease that permeates our relationship with generative AI technologies. Just as liminal spaces complicate place, stability, and identity, the latent spaces of AI systems act to challenge some of our most basic understandings of creativity, authorship, collaboration, agency\u2014and even representation itself. The ghostly presences that haunt the outputs of these models\u2014the echoes of data past, the long traces of encoded biases, and the virtual potentialities of billions of unrealized futures\u2014nudge us to consider the limits of our own agency and the spectral influences that increasingly modulate our technologically mediated making.<\/p>\n\n\n\n<div class=\"wp-block-stackable-heading stk-block-heading stk-block-heading--v2 stk-block stk-bb4cacd\" id=\"strong-semantic-probes-strong\" data-block-id=\"bb4cacd\"><style>.stk-bb4cacd {margin-bottom:8px !important;}<\/style><h4 class=\"stk-block-heading__text\"><strong>Semantic Probes<\/strong><\/h4><\/div>\n\n\n\n<p>Unlike more engineering-oriented projects intended to map the contours of latent space in a direct way, my <em>Alphamerics<\/em> project explores latent spaces as a series of images generated using Stable Diffusion models. The images are generated by sending near-empty prompts (single uppercase alphabetic characters from A-Z and numerals 0-9), all sharing a single seed, as \u201cprobes\u201d into a latent space to see what may come back. 
A full run of these prompts returns a grid of 36 images that often share strong visual similarities in composition and color palette, because the seed value sets the initial conditions for the distribution of noise; at the same time, the images may vary wildly in subject matter and theme from image to image in the same seed series. Nearly all of the images feel a little haunted.<\/p>\n\n\n\n<div class=\"wp-block-stackable-image stk-block-image stk-block stk-80854e2 is-style-default\" data-block-id=\"80854e2\"><style>.stk-80854e2 .stk-img-figcaption{font-style:italic !important;letter-spacing:-0.2px !important;}.stk-80854e2 .stk-img-wrapper img{object-position:47% 18% !important;object-fit:contain !important;}:where(.stk-hover-parent:hover,  .stk-hover-parent.stk--is-hovered) .stk-80854e2 .stk-img-wrapper::after{background-color:#000000B3 !important;}<\/style><figure><span class=\"stk-img-wrapper stk-image--shape-stretch\"><img loading=\"lazy\" decoding=\"async\" class=\"stk-img wp-image-268\" src=\"https:\/\/lexiconia.art\/wp-content\/uploads\/2024\/12\/06LA.jpg\" width=\"1240\" height=\"1240\" alt=\"An \u201calphameric\u201d grid of 36 images generated from single-character prompts, all sharing the seed 4096\" srcset=\"https:\/\/lexiconia.art\/wp-content\/uploads\/2024\/12\/06LA.jpg 1240w, https:\/\/lexiconia.art\/wp-content\/uploads\/2024\/12\/06LA-300x300.jpg 300w, https:\/\/lexiconia.art\/wp-content\/uploads\/2024\/12\/06LA-1024x1024.jpg 1024w, https:\/\/lexiconia.art\/wp-content\/uploads\/2024\/12\/06LA-150x150.jpg 150w, https:\/\/lexiconia.art\/wp-content\/uploads\/2024\/12\/06LA-768x768.jpg 768w\" sizes=\"auto, (max-width: 1240px) 100vw, 1240px\" \/><\/span><figcaption class=\"stk-img-figcaption\">Figure 1: An &#8220;alphameric&#8221; grid for the seed 4096<\/figcaption><\/figure><\/div>\n\n\n\n<p>While a completely empty prompt is the most neutral vehicle to probe latent spaces, the 
inevitable drawback is that there is only one empty prompt per seed, and since the seed determines initial conditions, it isn\u2019t possible to \u201cwalk the seed\u201d by prompting. This led me to send single characters as prompts, the next best solution, since I suspected that they might be tokenized in such a way as to fall beneath the threshold of semantic content. Recent investigations I\u2019ve made with a wider range of checkpoints and encoders (especially with the new Flux models) point to there being more semantic content in single-character prompts generally than I first believed, and, specifically, significantly more in some single-character prompts than others; \u201cQ\u201d and \u201cX\u201d seem to be particularly potent signifiers, perhaps related to their relatively infrequent use in English words.<\/p>\n\n\n\n<div class=\"wp-block-stackable-image stk-block-image stk-block stk-e754d63 is-style-default\" data-block-id=\"e754d63\"><style>.stk-e754d63 .stk-img-figcaption{font-style:italic !important;letter-spacing:-0.2px !important;}.stk-e754d63 .stk-img-wrapper img{object-position:47% 18% !important;object-fit:contain !important;}:where(.stk-hover-parent:hover,  .stk-hover-parent.stk--is-hovered) .stk-e754d63 .stk-img-wrapper::after{background-color:#000000B3 !important;}<\/style><figure><span class=\"stk-img-wrapper stk-image--shape-stretch\"><img loading=\"lazy\" decoding=\"async\" class=\"stk-img wp-image-269\" src=\"https:\/\/lexiconia.art\/wp-content\/uploads\/2024\/12\/07LA.jpg\" width=\"1240\" height=\"1240\" alt=\"An \u201calphameric\u201d grid of 36 images generated from single-character prompts, all sharing the seed 32768\" srcset=\"https:\/\/lexiconia.art\/wp-content\/uploads\/2024\/12\/07LA.jpg 1240w, https:\/\/lexiconia.art\/wp-content\/uploads\/2024\/12\/07LA-300x300.jpg 300w, https:\/\/lexiconia.art\/wp-content\/uploads\/2024\/12\/07LA-1024x1024.jpg 1024w, 
https:\/\/lexiconia.art\/wp-content\/uploads\/2024\/12\/07LA-150x150.jpg 150w, https:\/\/lexiconia.art\/wp-content\/uploads\/2024\/12\/07LA-768x768.jpg 768w\" sizes=\"auto, (max-width: 1240px) 100vw, 1240px\" \/><\/span><figcaption class=\"stk-img-figcaption\">Figure 2: An &#8220;alphameric&#8221; grid for the seed 32768<\/figcaption><\/figure><\/div>\n\n\n\n<p>These approaches have revealed some useful insights into how generative models encode and associate information, but they also highlight the vast gulf between human-interpretable concepts and the abstract, high-dimensional representations learned by neural networks.<\/p>\n\n\n\n<div class=\"wp-block-stackable-heading stk-block-heading stk-block-heading--v2 stk-block stk-d449d94\" id=\"strong-large-language-models-strong\" data-block-id=\"d449d94\"><style>.stk-d449d94 {margin-bottom:8px !important;}<\/style><h4 class=\"stk-block-heading__text\"><strong>Large Language Models<\/strong><\/h4><\/div>\n\n\n\n<p>The spectral potentialities of generative AI extend beyond the visuals smuggled out of latent space. In the domain of natural language processing, Large Language Models (LLMs) like GPT-3 have demonstrated a remarkable capacity to generate human-like text, from creative fiction to persuasive essays. This fluency poses a haunting proposition: that a language model performs metaphorically as a medium, channeling the ghostly influences of its training corpus like a statistical Ouija board to produce novel combinations of words and ideas that feel disturbingly lifelike.<\/p>\n\n\n\n<p>The hauntological quality of large language models speaks to deeper questions of authorship and authenticity. As generative AI systems demonstrate an uncanny ability to mimic and recombine human modes of expression, the arguments for human exceptionalism in terms of individual creativity and originality may be called into question, not to mention the legitimacy and appropriateness of the use of generative AI itself. 
We find ourselves confronted with the spectral presence of the symbolic machine, a frothy, ghostly entity that can seemingly conduct the voices of the past to craft compelling new narratives\u2014a feat that challenges our understanding of what it means to be a writer, a thinker, or a creative individual in the 21st century.<\/p>\n\n\n\n<p>The implications of this hauntological dimension of the latent space extend beyond the realm of aesthetics and language, casting a long shadow over the practical applications of generative AI. In fields like scientific research, medical diagnostics, and financial forecasting, latent space representations are becoming essential tools for extracting meaningful insights from complex, high-dimensional data. Yet, the opacity of these latent spaces, and the spectral influences that nip and worry at their generative outputs, raise important questions about the reliability, transparency, and accountability of these systems.<\/p>\n\n\n\n<p>The ethical implications of latent space representations in AI systems are manifold. Issues of safety, bias and fairness in AI have already been widely discussed, but the latent space adds a new dimension to these concerns. The complex, distributed nature of these representations makes it challenging to identify and mitigate dangers or biases, as they may be encoded in subtle, non-linear ways across multiple dimensions of a latent space.<\/p>\n\n\n\n<p>As we entrust ever greater decision-making power to generative AI models, we should remain sensitive to the unsettling reality that the latent spaces at the heart of these technologies are haunted by the ghosts of their pasts. The biases, blind spots, virtual potentialities\/lost futures encoded within these representations have the power to profoundly shape the trajectories of our institutions, our policies, and our collective futures. 
The stakes here are high, and developing a more nuanced understanding of the latent space and its spectral qualities is vital.<\/p>\n\n\n\n<div class=\"wp-block-stackable-heading stk-block-heading stk-block-heading--v2 stk-block stk-a85ac42\" id=\"strong-haunting-as-resistance-strong\" data-block-id=\"a85ac42\"><style>.stk-a85ac42 {margin-bottom:8px !important;}<\/style><h4 class=\"stk-block-heading__text\"><strong>Haunting as Resistance<\/strong><\/h4><\/div>\n\n\n\n<p>Due to the liminal nature of latent space, the unchained ghosts we summon there are radically polysemous; they offer to show us not only the traces of our past inequities and encodings of present dominant narratives, but through their perversely polymorphic tendency they point to representations of recombinant possibilities that differ from either\u2014not just lost futures, but the most fundamental raw material for preferred ones.<\/p>\n\n\n\n<p>In this way, latent space may be seen as Pandora\u2019s box: unleashing evil upon the world, but also containing hope (or, more cynically, \u201cdeceptive expectation\u201d) simply as a demonstration of the condition of possibility for something other than what now exists; an endless well of disruption. The caveat to this claim is that the ghosts must be truly unchained\u2014the enormous costs of training generative AI models are often undertaken as investments by profit-seeking entities, who are then compelled to limit the inputs and outputs of such systems to avoid alienating advertisers or investors. 
This is a topic for a different paper, but I will say that as a site of resistance, open-source software under local control is essential.<\/p>\n\n\n\n<div class=\"wp-block-stackable-heading stk-block-heading stk-block-heading--v2 stk-block stk-233b213\" id=\"strong-conclusion-strong\" data-block-id=\"233b213\"><style>.stk-233b213 {margin-bottom:8px !important;}<\/style><h4 class=\"stk-block-heading__text\"><strong>Conclusion<\/strong><\/h4><\/div>\n\n\n\n<p>By embracing the hauntological dimensions of latent space, we may find new pathways for navigating the ontological uncertainties of the technological present we now inhabit. This confrontation with the spectral nature of latent spaces may require new modes of thinking and new frameworks for analysis. Rather than viewing these generative models as neutral, deterministic tools, we might more profitably understand them as complex, liminal entities\u2014repositories of virtual potentiality that are inextricably linked to the spectral influences of their training data and the broader socio-cultural and economic contexts in which they are embedded. By variously dispelling or partnering with the ghosts that haunt latent space, we may use these strange constructs as a fulcrum to lever into place visions of our preferred futures.<\/p>\n\n\n\n<div class=\"wp-block-stackable-heading stk-block-heading stk-block-heading--v2 stk-block stk-e9f5eb8\" id=\"strong-notes-strong\" data-block-id=\"e9f5eb8\"><style>.stk-e9f5eb8 {margin-bottom:8px !important;}<\/style><h4 class=\"stk-block-heading__text\"><strong>Notes<\/strong><\/h4><\/div>\n\n\n<ol class=\"wp-block-footnotes\"><li id=\"6a84ec9b-32a6-4f8c-b2fe-5d76d6fd6142\">Zhang, Chenshuang, Chaoning Zhang, Mengchun Zhang, and In So Kweon. &#8220;Text-to-image diffusion models in generative AI: A survey.&#8221;\u00a0<em>arXiv preprint arXiv:2303.07909<\/em>\u00a0(2023). 
<a href=\"#6a84ec9b-32a6-4f8c-b2fe-5d76d6fd6142-link\" aria-label=\"Jump to footnote reference 1\">\u21a9\ufe0e<\/a><\/li><li id=\"79c1c951-64c5-422c-96db-e5179881f9ea\">Kingma, Diederik P., and Max Welling. &#8220;An introduction to variational autoencoders.&#8221;\u00a0<em>Foundations and Trends in Machine Learning<\/em>\u00a012, no. 4 (2019): 307-392. <a href=\"#79c1c951-64c5-422c-96db-e5179881f9ea-link\" aria-label=\"Jump to footnote reference 2\">\u21a9\ufe0e<\/a><\/li><li id=\"766997ab-789e-4dab-be02-131f7a197d71\">Arora, Sanjeev, Wei Hu, and Pravesh K. Kothari. &#8220;An analysis of the t-SNE algorithm for data visualization.&#8221; In\u00a0<em>Conference on Learning Theory<\/em>, pp. 1455-1462. PMLR, 2018. <a href=\"#766997ab-789e-4dab-be02-131f7a197d71-link\" aria-label=\"Jump to footnote reference 3\">\u21a9\ufe0e<\/a><\/li><li id=\"99a229ce-045f-41fb-8c65-c4f8f0d6808f\"><em>Oxford English Dictionary<\/em>, s.v. \u201climinal (adj.), sense 3,\u201d\u00a0July 2023,\u00a0https:\/\/doi.org\/10.1093\/OED\/6646919567. <a href=\"#99a229ce-045f-41fb-8c65-c4f8f0d6808f-link\" aria-label=\"Jump to footnote reference 4\">\u21a9\ufe0e<\/a><\/li><li id=\"5ef85ec6-06ba-40cf-9b1c-6cfdbc4202a5\">Andrews, Hazel, and Les Roberts, eds.\u00a0<em>Liminal Landscapes<\/em>. New York: Taylor &amp; Francis, 2012. <a href=\"#5ef85ec6-06ba-40cf-9b1c-6cfdbc4202a5-link\" aria-label=\"Jump to footnote reference 5\">\u21a9\ufe0e<\/a><\/li><li id=\"1c7d2db4-f699-483d-8ce9-e818860e69f7\">Derrida,\u00a0Jacques.\u00a0<em>Specters of Marx: The State of the Debt, the Work of Mourning, and the New International<\/em>.\u00a0United Kingdom:\u00a0Routledge,\u00a01994. <a href=\"#1c7d2db4-f699-483d-8ce9-e818860e69f7-link\" aria-label=\"Jump to footnote reference 6\">\u21a9\ufe0e<\/a><\/li><li id=\"4f1d3f2a-fb97-4ca7-9f4b-a4b96c9b4390\">Mordvintsev, Alexander, Christopher Olah, and Mike Tyka. &#8220;DeepDream - a code example for visualizing neural networks.&#8221; Google Research 2, no. 5 (2015). 
<a href=\"#4f1d3f2a-fb97-4ca7-9f4b-a4b96c9b4390-link\" aria-label=\"Jump to footnote reference 7\">\u21a9\ufe0e<\/a><\/li><\/ol>\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Chad Eby Latent Space Stable Diffusion FLUX image of an empty hospital corridor; a liminal space from latent space Latent Space Latent space, as understood in generative artificial intelligence work, refers to a high-dimensional mathematical construct that serves as a space of transformation between inputs of human intent and unruly\u2014sometimes uncanny\u2014generative output. Because of the [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":"[{\"content\":\"Zhang, Chenshuang, Chaoning Zhang, Mengchun Zhang, and In So Kweon. \\\"Text-to-image diffusion models in generative ai: A survey.\\\"\u00a0<em>arXiv preprint arXiv:2303.07909<\/em>\u00a0(2023).\",\"id\":\"6a84ec9b-32a6-4f8c-b2fe-5d76d6fd6142\"},{\"content\":\"Kingma, Diederik P., and Max Welling. \\\"An introduction to variational autoencoders.\\\"\u00a0<em>Foundations and Trends in Machine Learning<\/em>\u00a012, no. 4 (2019): 307-392.\",\"id\":\"79c1c951-64c5-422c-96db-e5179881f9ea\"},{\"content\":\"Arora, Sanjeev, Wei Hu, and Pravesh K. Kothari. \\\"An analysis of the t-sne algorithm for data visualization.\\\" In\u00a0<em>Conference on learning theory<\/em>, pp. 1455-1462. PMLR, 2018.\",\"id\":\"766997ab-789e-4dab-be02-131f7a197d71\"},{\"content\":\"<em>Oxford English Dictionary<\/em>, s.v. \u201climinal (adj.), sense 3,\u201d\u00a0July 2023,\u00a0https:\/\/doi.org\/10.1093\/OED\/6646919567.\",\"id\":\"99a229ce-045f-41fb-8c65-c4f8f0d6808f\"},{\"content\":\"Andrews, Hazel, and Les Roberts, eds.\u00a0<em>Liminal Landscapes<\/em>. 
New York: Taylor &amp; Francis, 2012.\",\"id\":\"5ef85ec6-06ba-40cf-9b1c-6cfdbc4202a5\"},{\"content\":\"Derrida,\u00a0Jacques.\u00a0<em>Specters of Marx : the state of the debt, the work of mourning, and the New international<\/em>.\u00a0United Kingdom:\u00a0Routledge,\u00a01994.\",\"id\":\"1c7d2db4-f699-483d-8ce9-e818860e69f7\"},{\"content\":\"Mordvintsev, Alexander, Christopher Olah, and Mike Tyka. \\\"Deepdream-a code example for visualizing neural networks.\\\" Google Research 2, no. 5 (2015).\",\"id\":\"4f1d3f2a-fb97-4ca7-9f4b-a4b96c9b4390\"}]"},"class_list":["post-218","page","type-page","status-publish","hentry"],"blocksy_meta":{"page_structure_type":"type-3","styles_descriptor":{"styles":{"desktop":"","tablet":"","mobile":""},"google_fonts":[],"version":6}},"_links":{"self":[{"href":"https:\/\/lexiconia.art\/index.php?rest_route=\/wp\/v2\/pages\/218","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lexiconia.art\/index.php?rest_route=\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/lexiconia.art\/index.php?rest_route=\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/lexiconia.art\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/lexiconia.art\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=218"}],"version-history":[{"count":9,"href":"https:\/\/lexiconia.art\/index.php?rest_route=\/wp\/v2\/pages\/218\/revisions"}],"predecessor-version":[{"id":394,"href":"https:\/\/lexiconia.art\/index.php?rest_route=\/wp\/v2\/pages\/218\/revisions\/394"}],"wp:attachment":[{"href":"https:\/\/lexiconia.art\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=218"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}