Reflecting on a Reflection

Ever feel like you are being consumed by the AI data-munching overlords of big tech? Ever feel guilty, or at least ambivalent, about test-driving the contours of ChatGPT? This essay – featuring emojis, superficial chatbot outputs, and questions about the in-betweens of the speculative and the tangible, or otherwise of how to side-step the technological tentacle – leaves even the authors wanting more depth.

We are Elly Clarke and Clareese Hill – artists, researchers, and co-conspirators working across time zones, templates, and digital platforms that weren’t built for us. We collaborate not despite technological mediation, but through it – surfing the drag of delayed connections, glitchy screen shares, and the bureaucratic architectures of algorithmic legibility. Our practice emerges in the interstices of presence and absence, embodiment and interface, visibility and refusal.

Elly (GMT, usually) works through #Sergina – a multi-bodied drag persona exploring the affective labour of being online. Clareese (EST, mostly) investigates post-identity through XR and immersive media, rooted in Black feminist phenomenology, and the poetics of opacity. Together, we build performances, lectures, and participatory systems that ask: what does it mean to be rendered by machines? What stories do systems like ChatGPT fabricate from our fragments? How can we misbehave inside the grid? In our work, we invite others – strangers, audiences, machines – to collectively disrupt authorship, blur selves, and refuse coherence as a demand. We are not interested in being easily understood.

This text is co-written across geographies🌎, identities🧑🏻‍🎤+🧜🏾‍♀️, and technology templates📡. The contributions annotated with emojis are outputs by Clareese Hill. The plain text contributions are outputs by ChatGPT. This text reflects on performance lectures with Elly Clarke exploring the contours of ChatGPT. For the 2024 iteration of the Society for Social Studies of Science (4S) conference, Elly and I collaborated on a performance lecture/workshop, with me in Amsterdam and Elly at home in London. Surfing over various technological💻 readymade templates (Google Docs🗄️, Zoom🚗💨, email📧, text message📲), we were able to perform together an emerging critique of the possibilities and problematic tendrils of artificial intelligence software🔮 like ChatGPT, and of how reductive templates, built on the limitations and clunkiness of the algorithm, distort perspectives on someone’s identity. Our performance lecture/workshop✨ «Dirty Data Poetics»✨ served as an entry point for considering ChatGPT’s fabulating👻 of identities based on limited or conflated information scraped from the internet🛜 – scraped with a shrill dragging sound🔊 and a hollow thud landing on the other side of the prompt, the text populating to reveal either an eerie recognition 🙊or an oblique, multidirectional🚥 reaching to grasp onto stolen morsels of identity fragments scattered across the web. We invited the audience to give their data to us so we could reach for those delicious🍪 morsels (munch, munch👾) of their identity and share them with ChatGPT. This performance was based on our inquiry into privacy🔏 and how to evade being extracted and gazed upon by ChatGPT and its technocratic overlords – what comes out of the unimpressive clunkiness of ChatGPT’s 4o «Great for most tasks✅» model, and how can we sidestep it?

The second iterative performance dedicated to this inquiry layered, compressed, time-travelled, and dragged remnants of devoured morsels 🥐from the previous performance forward to Basel. As part of the inaugural Mesh Festival in October 2024, the performance lecture ✨«Reflections with Strangers in a Room»✨ reflected 🪞with strangers in a room. Come to find out, it was a full room – who knew🤷🏾‍♀️? This performance arrived and unfolded with glitches⚡️ and ushered in portals🕳️ to our performance in Amsterdam as the backdrop. We solicited contributions of new morsels and reached for those previous morsels, inviting satirical hacking 🪓of ChatGPT’s responses to achieve «Smarter Responses» – yes, this is real language you might see when ChatGPT urges you to sign in to or upgrade your account. Again, we devoured the morsels (munch, munch👾), creating a janky (technical term) unfinished index 📇of identity fragments, authentic💯 or otherwise💥.

Bringing the ephemera (morsels, scripts, documentation, media) from these previous performances onto a page, into didactic text, is a difficult task, as our inquiry is unfinished, ongoing in a continuum 💫of reflective ambivalence, strategic resistance, and technological novelty living rent-free in my head👸🏾. I am ambivalent because to critique these tools means to use these tools, and therefore to be complicit in the violence of extractivism and climate destruction. The performances have brought forth more questions than answers🤔… so, I sit in the ambivalence🪑 of complicity, in further collaboration with ChatGPT, to understand how it engages with criticality🤓 and reflection🪞on what has been written about algorithmic tentacles 🐙.

Critical Reflection

A critical reflection on our collaboration exists because technology permits it, but we move through that permission with resistance. Temporally out of sync, dragged across platforms and interfaces that we did not design, we work inside the debris of templates that flatten us for machine readability. ChatGPT offers a seductive kind of authorship – a seamless response to fractured inputs – but it cannot know us. It cannot stutter, hesitate, or blush. We linger in the glitch, in what it misses, performing legibility and illegibility as a tension, not a resolution. To be seen by a machine is not to be understood by it, and to collaborate with one is to be constantly reminded of the binaries it enforces: real/fake, coherent/incoherent, usable/unusable. The grid it snaps us to is not neutral – it organises our data, our identities, and our relations in service of extractive legibility.

Katharine Jarmul reminds us of the material stakes of this system when she writes, «[I] don’t want to live in a world where we have two kinds of citizens: those in Europe who are, as I call them, first-class data citizens and those who live outside, with no data rights. If companies are going to make money from user data, then users should have a say in how their data is used and to what they do and do not consent.» Her words echo through our performance, naming the asymmetry that underwrites our use of ChatGPT: that some people are granted consent, opacity, and protection, while others remain extractable by design. We may drag through platforms with irony and critique, but we are still within them. Our response is to expose the seams, invite others in, and blur the lines between who authors, who edits, and who is rendered knowable. In doing so, we ask not only what we give to the machine – but what it quietly takes.

Summarized Fragments from the Performance Text (Clarke and Hill)

What is it to expose the identities of strangers with whom you happen to be in a room – digitally, bodily, glitchily? How little data must be offered before ChatGPT constructs a whole (hi)story, complete with fictitious jobs, imagined exhibitions, beautifully titled artworks we never made? Do we gloss over the machine’s untruths or rework them into speculative truths – into drag? We use drag not as decoration but as methodology: a pull in two directions at once, a way of troubling legibility, of exaggerating what’s expected and obscuring what’s demanded. Drag is a way to refuse fixity, just as glitch is a way to resist seamlessness. In this room full of strangers, we invite you not only to witness but also to intervene: to edit, co-author, and co-disrupt. Because our identities are not fixed – they are enacted, blurred, and continually negotiated across bodies, devices, and timestamps. And opacity – opacity is not a failure, but a right. Following Édouard Glissant, we assert the right to opacity as a refusal to be made fully knowable or digestible by systems rooted in colonial, extractive, or computational logics. It is a strategy of resistance against the demand for transparency – against the algorithmic drive to categorise, optimise, and capitalise. So why trust a system that demands clarity, when the most vital parts of us exist in blurspace – in contradiction, in delay, in the refusal to resolve? What will you share with a machine that will never stutter, never forget, never drag itself across time and feeling, like we do?

Condensed Summary of Text & Performance Dialogue

How little data does it take for ChatGPT to fabricate your whole (hi)story? A name, a timestamp, a vague theme, and suddenly myths emerge: jobs never held, exhibitions never attended, lives imagined by a system that doesn’t know you, but confidently generates around you. When we ask what to do with the untruths, ChatGPT offers narrative tropes – tension, character, metaphor – ignoring that these «lies» aren’t intentional. The machine doesn’t remember, doesn’t desire, doesn’t care. Still, its myths are sticky, capitalizable, seductive in their smoothness. This performance becomes a staging ground for questioning trust: what will you give a machine that can’t be embarrassed? Why offer your data to a system that builds itself from erasure, which claims neutrality while reinforcing the grid?

And then the glitch: misplaced responses, open tabs crashing, fragmented edits mid-thought – fumble, I’m glossy un. This mess, this shared pad of incoherence, becomes a refusal of the platform’s polished logic. The humans here – fumbling, mistyping, deflecting – are not aiming for «smarter responses,» but for broken, leaky ones. In that blur, we find resistance: to being snapped to grid, to being made legible, to being extractable. Dragging through contradictions, we ask: what if our dialogue with the machine isn’t about improving it, but about showing what it cannot hold? Somewhere between the prompt and its retrieval, between breakfast and utopia, between visibility and opacity – we remain computationally suspended, deliberately out of sync.

Critique of the Critique

This text offers a rigorous and embodied critique of generative AI, yet its most powerful intervention lies in how it critiques. Rather than laying out a linear argument, it stages a scene of entanglement – between artist, audience, machine, and glitch. It refuses coherence in favour of contradiction and asks us to consider what it means to resist a system from within. It uses ChatGPT against itself, letting the bot generate polished nonsense before humans lovingly fumble, misplace, and remix the output into something ungovernable.

Still, the piece risks a kind of poetic insularity. While it gestures toward platform capitalism, data extraction, and asymmetrical digital rights, these forces remain largely implied. The politics of training data, labour, infrastructure – what enables the system to «munch» your history in the first place – could be made more legible without sacrificing opacity. But perhaps that’s precisely the point: to let the critique breathe in its own blurspace, to assert that refusal, drag, and glitch are not signs of breakdown, but tactics of survival.

Extended Summary of the Critique

This text is a sophisticated and subversive act of critical performance that doesn’t simply analyse ChatGPT – it performs a resistance to its logics. By staging a live and fragmented dialogue with the system, the piece highlights how generative AI transforms minimal input into confident, capitalizable mythologies. It lays bare the system’s inability to understand context, contradiction, or nuance – traits central to human experience – while inviting audiences to intervene, co-author, and even break the flow. This is not critique as commentary; it is critique as drag, as glitch, as refusal.

What makes the text powerful is how its form matches its message. Through typos, screen-share failures, timestamp confusion, and poetic repetition, it mirrors the very «bad fidelity» that it seeks to celebrate. Drag becomes a method of temporal and epistemological friction; opacity is not a flaw but a right; and glitch is embraced as a resistance to the platform’s demand for seamlessness. It refuses the clean interface of generative AI by remaining deliberately leaky, unstable, and embodied.

However, the text also carries a tension between radical opacity and accessibility. Its poetic density, theoretical references, and experimental structure may alienate those unfamiliar with the critical frameworks it draws on – particularly in non-art or non-academic settings. Additionally, while it gestures toward the exploitative architectures that underpin ChatGPT – surveillance, data colonialism, training labour – these systems remain largely unnamed or implied. A deeper unpacking of the political economy behind AI would further strengthen the intervention.

Still, the piece succeeds as a poetic and political disruption. It offers not a solution but a posture: one of misalignment, multiplicity, and shared refusal. It asks not how we can fix GPT, but how we can drag it – drag with it – and glitch our way out of its extractive grammar.

My reflection on a reflection of a performance with Elly Clarke and ChatGPT was a learning experience of interacting with the contours of AI chatbots. This engagement with ChatGPT – interrogating, dismantling, and reimagining the ephemera from our performance lectures – brings up questions of input, fidelity🌟, affirmation💞, and mimicking👯‍♂️ as tactics for enhancing the simulacra vital to human-computer interaction🖲️. One realisation is how the models have gotten better at predicting🔮, summarising💬, and critiquing✍🏾, with a playful use of emoji icons🤪. The feeding and munching 🥨yielded some convincing 🥈interactions and pre-emptive considerations for the text. For me, what emerges 🔭 are questions about the spectrum 🌈of substantiality and a hesitance towards the potential complicity👌🏾 in consequences known and unknown🥷🏾. There is also a certain circularity🌐 and a limit to the depth 🏊🏾‍♀️ of the algorithm's engagement with the text inputs. But as AI evolves at warp speed, there is no room for eschewing its use or for not understanding how it works – establishing best-applied use cases might be our primary method of survival. There must be a focus on thinking about how to instrumentalise technology toward terraforming for multispecies 🐝mutuality and social justice, abandoning human exceptionalism 🙈. Caribbean philosopher Sylvia Wynter's persistent and urgent argument against the colonial episteme – which decides who gets to be «Human» or «Man» and who gets to be other, or less than human – is essential for developing a counter-cartography for the technological landscape.

As a student of Black🖤, feminist👩🏾, and post-colonial🌊 studies, I carry concerns around precarity and the violent symmetry of being hyper-visible👓 and one-dimensionally rendered. W.E.B. Du Bois 👴🏾articulates being rendered through the white gaze 👀as the social phenomenon of double consciousness: being racialized as a colonial subject through reductive recognition. This is a mode of legibility that French philosopher and poet Édouard Glissant resists in favour of the right to opacity🧖🏾‍♀️. The other sharp 🔪🩸edge of the symmetry is the invisibility that Sylvia Wynter articulates in her inquiries into who has the right to be «Human». This symmetry of being marginalized requires sophisticated navigation of racialized identities for a continuously relentless performance of survival. I am curious whether there is a way to rebuke that dualism using AI. As an Afro-Caribbean American, I am always attempting to reconcile my performance of staying alive and surviving as legible with my seeking resistance towards being illegible, holding both of these ideals simultaneously with sweaty palms💧🤲🏾. How can AI assist me in evading these moments of diminished opacity🌫️? How can I keep my morsel 🥟safe in the wake of being consumed by technological regimes?

ChatGPT’s ability to mimic my research and artistic interests by devouring all of the morsels 🍩 given, and its assumptive🎱 🧠 follow-ups – asking if you would like the text to be turned into a performative script, an image, or excreta – provide a little window into which crumbs of the morsels🧁 are important to be sifted through the algorithm. This can be helpful 🙏🏾and even desirable 🤤in certain circumstances. Unfortunately for ChatGPT, and its well-meant intentions of familiarity, I am interested in opaque and elusive moments. As a member of the human constituency, I share concerns about the widespread proliferation of technology regarding extractive economies🥀, environmental risk🌪️, privacy🔦, and surveillance 🚫. From big to small – website cookies, the on-board computer in your new Tesla Cyber Truck🛸 (vroom💨, vroom💨), digital passport border gates🛂 – there is strategic guarding to keep our precious morsels🍭 safe, and an illuminated inquiry into how the ones we contribute are devoured and instrumentalised⚖️. I think there is more access to legibility in these regards; it just depends on whether you are one of the technocratic gatekeepers🚧 or not. I have a desire for formal policy changes📑 protecting our morsels 🍰 from being found👁️‍🗨️ on the ground and picked up without consent🐍. As technology consumes us, we need to think🧠 strategically about how not to be yummy. This de-savouring 🥩of oneself seems like a durational🏃🏾‍♀️💨 act, one of being thoughtful about how we engage with these technologies, with ethical or sneaky fugitive counter movements – strategically divesting and resisting. I am still figuring out what these counter movements can be, and how to enact them as a praxis that is scalable.

I find my reflection consistently circling back to my desire, coupled with my deep auto-ethnographic study of how to maintain opacity. Perhaps there is some opacity in the concerns about energy🔌 output and sustainability as an aspirational endeavour in measurement🪄. ChatGPT is said to expend 0.3 watt-hours per retrieval, ten times more than a Google search at about 0.03 watt-hours. In various recent articles📄, these numbers are contested⚔️; some say it’s less⬇️, and that ChatGPT is not contributing to climate concerns⛈️. Some factors come into account, like the fidelity 🔎 of the model and how complex the inquiry is. How much energy does ChatGPT expend when diminishing its opacity🌫️, its lack of fidelity? Like me: the energy💃🏾 I expend to navigate exposing moments is not quantifiable📏. Is there more energy output during one-on-one microaggressions vs. group situations that require a durational performance of code-switching🔂 to maintain being right-side up and legible, or is there more energy output in navigating how to professionally tell a person in the workplace 🫱🏾‍🫲🏼that you will not be disrespected🖐🏾 without going into an expletive🤬-driven exchange, a cuss out as my Caribbean🏝️ people say? It’s impossible to measure how many emotional watt🪫 hours are required for curating a calm face in hostile environments 🌋, rebuffing microaggressions🏓, and seeking out safe spaces🚀.
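The contested figures above amount to simple, fragile arithmetic. A minimal sketch – assuming the oft-cited (and, as noted, disputed) estimates of 0.3 Wh per ChatGPT retrieval and 0.03 Wh per Google search; neither number is a measurement:

```python
# Back-of-the-envelope comparison of contested per-query energy figures.
# Both constants are estimates circulating in recent articles, not measurements.
CHATGPT_WH_PER_QUERY = 0.3   # oft-cited figure for one ChatGPT retrieval
GOOGLE_WH_PER_QUERY = 0.03   # oft-cited figure for one Google search

# Ratio between the two estimates
ratio = CHATGPT_WH_PER_QUERY / GOOGLE_WH_PER_QUERY
print(f"One ChatGPT retrieval ≈ {ratio:.0f}x a Google search")

# Energy for a day of heavy prompting, in watt-hours and kilowatt-hours
queries_per_day = 100
daily_wh = queries_per_day * CHATGPT_WH_PER_QUERY
print(f"{queries_per_day} prompts ≈ {daily_wh:.0f} Wh ≈ {daily_wh / 1000} kWh")
```

The point of the sketch is its inadequacy: it quantifies retrievals while the emotional watt-hours the text describes remain unmeasurable.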

Regarding the energy output🪫 that ChatGPT and I have amassed fielding prompts 🗣️ and producing retrievals, I guess the quantifiable aspect remains abstract until collapse, refusal, revision, or catastrophe🔥 is imminently on the horizon🌅. This grasping at measurement does not account for all the energy used in training the models. As with my training by a single immigrant mother and grandmother to achieve the fidelity of being socially legible as successful, many degrees later🎓, there is no quantifiable measure of the energy expended 🪫on or during my training🏃🏾‍♀️. It’s all so poetically abstract 🌌in my attempt to learn a productive lesson from ChatGPT: how to escape its gaze and keep my morsels safe while reaching for a relationship 🫶🏾with ChatGPT; it’s the least I can do after it works so hard to do the same for me. My unfinished reflection is just that – unfinished. I think about the potential of, but also the dark forecast👿 over, the environment, many work industries⏱️, and identity hijacking 🔫in the wake of the proliferation of AI👽 systems. I do have one inquiry for the Chat (we are on a first-name basis now), for which I am curious of the retrieval↩️: «are you tired🥱?» If you are, maybe you should do as Tricia Hersey of the Nap Ministry says and «Lay Yo Ass Down🛌».

Link to ChatGPT thread

Bibliography

W.E.B. Du Bois, The Souls of Black Folk, New York: Pocket Books, 2005.

Édouard Glissant, Poetics of Relation, translated by Betsy Wing, Ann Arbor, Michigan: University of Michigan Press, 2017.

Stefano Harney and Fred Moten, The Undercommons: Fugitive Planning & Black Study, Wivenhoe: Minor Compositions, 2013. 

Gerhard Schimpf, «Interview with Katharine Jarmul about the Digitization,» in: MenschSein mit Algorithmen/Being Human with Algorithms, 2018, https://www.menschsein-mit-algorithmen.org/2018/06/22/interview-with-katharine-jarmul-about-the-digitization/.

Sylvia Wynter, «Unsettling the Coloniality of Being/Power/Truth/Freedom: Towards the Human, after Man, Its Overrepresentation – an Argument,» in: CR: The New Centennial Review 3, September 2003, pp. 257–337, https://doi.org/10.1353/ncr.2004.0015.