Emerging Artists
July 23, 2025

Can Art and Tech Giants Shape Education with AI?

A new program hosted at Tate is generating critical and creative approaches to technology, finds Robin Leverton
Credit: Belén Fernández, Downloaded, 2025. Photography by Ariel Haviland. Courtesy of the artist

Artists have always used technology to explore new creative frontiers. In that spirit, Goldsmiths and the University of the Arts London (UAL) selected 40 students to participate in a project focused on AI as a tool and creative partner. Coinciding with Tate Modern’s 25th birthday and supported by Anthropic, the company behind the AI assistant Claude, Tech, Tea + Exchange offered students an intensive program of workshops where they could learn from prominent digital artists, curators, and media theorists while developing their own research practices.

Working at the crossroads of art, tech, and academic insight was exhilarating, but the real highlight was watching students grow into confident, critical users of Claude and generative AI, harnessing these tools as creative partners in their own evolving practices. I hope we can continue this exciting journey together. (Annie Bicknell, Curator of Public Practice, Tate)

Occupying the Taylor Digital Studio at Tate Britain and the Blavatnik Building at Tate Modern, participants hailed from a wide spectrum of creative disciplines and degree levels: from Textiles to Computational Arts to Music, and from BA to PhD. In this way, the project explored a horizontal form of pedagogy that turned the Tate into a test bed for critical and creative approaches to new media. In the following conversation, participating artists discuss their creative responses to Claude and the emergent potential of latent space with Robin Leverton.

Students participating in Tech, Tea + Exchange at Tate Modern. Photography by Ariel Haviland

Robin Leverton: The Digital Intimacies Learning Season at Tate has seen a convergence of organizations, each with their own motivations. As participating artists, what has been your experience of this “para-institutional” approach?

Esther Bello: Before beginning the project, I assumed there would be clear institutional boundaries within which we would need to operate. But the experience was more nuanced. We were free to criticize without fear of backlash while also accepting the inherent institutional constraints. The project seemed to exist within the structures of the four institutions yet retained a sense of autonomy — existing within but outside.

Molly Bright + Weezy Derham: We developed a strong collaborative partnership during the project, with our respective institutions — Goldsmiths and UAL — effectively facilitating the convergence of our complementary skills and knowledge. We brought together traditional expertise in textiles with computational and coding experience, creating a productive interdisciplinary exchange that neither of us could have achieved alone.

Molly Bright and Weezy Derham, Abomination Against Nature Itself, 2025. Photography by Robin Leverton. Courtesy of the artists

Belén Fernández: The project made visible a layered network of motivations: Tate’s interest in public engagement, Anthropic’s AI experimentation, and the academic reflexivity of Goldsmiths and UAL. Together, they do form a “para-institution” — not a single voice, but a shared space where power, knowledge, and experimentation circulate differently than in traditional institutional hierarchies. As a resident, I’ve experienced this hybridity as generative. 

Each institution lent its identity but also gave space for critique, reflection, and friction. This between-space made it possible to work with AI not just as a tool, but as a social and institutional actor.

London Ham: As a resident, I found the intersections of these institutions to be incredibly fertile ground for experimentation and reflection. The tension between what each organization recognizes as the purpose of these tools was an interesting starting point for how artists could develop a collective understanding of this software.

Nikos Antonio Kourous Vázquez: We’re currently at a point where institutions — especially creative ones — are still defining their relationships with artificial intelligence. This process of discovery creates its own para-institution, muddled by opposing goals between the technology companies behind AI, the artists using the technology, and the institutions deploying its outcomes.

A full house for Jennifer Walshe and Annouchka Bayley in conversation with Bidisha. Photography by Ariel Haviland
I really enjoyed the experience at Tate; it was great to meet so many talented people and explore AI in depth. I hope that more people start creating tools to support nature and human well-being. For now, I plan to use AI mainly to refine ideas and help me learn new digital tools. (Zhannet Podobed, MFA Digital Arts, UAL)

RL: Data is AI’s primary resource and one of its greatest controversies. Foundation models have been trained on datasets widely understood to include text scraped from the internet without consent. How do you acquire the data you use for your work? Should artists adopt the same standards as corporations?

LH: For this project, I chose material from the Tate collection in order to connect my project to the site where the work was being made. The images in my work are often readymade images, so my practice functions in a way not entirely dissimilar to a foundation model. Unlike a foundation model, though, the images I select are not morphed and amalgamated into something that appears novel; they are unmediated references to images within my own mental canon, made by artists I admire. I have the sense that an image in the world belongs to the world, except in circumstances where it is exploited for profit or against the intention of the artist who created it.

MB + WD: We developed the materials for our project by working with existing LLMs (large language models) such as DeepSeek, carefully probing and iterating with them until we achieved the desired results. When we did incorporate gathered data, we made sure to use existing open-source datasets that were free from copyright restrictions.

As artists, we’re deeply uncomfortable with the idea of LLMs appropriating our creative work without consent. This ethical concern directly informed our approach. We were conscious of not wanting to perpetuate the same problematic data practices that we critique in corporate AI development. However, we acknowledge that this ethical stance came with creative limitations. Ideally, we would have preferred to combine our own original data with other ethically sourced materials, as this would have given us greater creative freedom and ownership over the process. 

We think that artists should absolutely adopt higher ethical standards than corporations currently do — we have the opportunity to model more responsible practices and demonstrate that meaningful creative work with AI is possible without exploiting others’ intellectual property.
Nikos Antonio Kourous Vázquez, Reginald Gilbert the Third, 2025. Photography by Robin Leverton. Courtesy of the artist

NK: Data acquisition malpractice by AI companies, on top of data already skewed by binary, aged, and Western perspectives, inherently leaves a stain on creative work with AI. However, while training your own models or acquiring your own data is definitely a strong approach as an artist, it also leaves you unable to critically engage with AI’s existing stains. For this project, I used Anthropic’s pre-trained Claude LLM to create an autonomous agent that endlessly hallucinates things about itself, thereby letting it speak for itself. Rather than hiding Claude’s blemishes, why not let them reveal themselves?
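
Kourous Vázquez’s agent itself isn’t published, but the feedback loop he describes can be sketched in a few lines. The following is a minimal sketch, assuming the official Anthropic Python SDK; the model name, recall prompt, and loop length are illustrative placeholders, while the seed persona comes from his own description later in this piece.

```python
# A minimal sketch of a self-hallucinating agent loop, assuming the
# Anthropic Python SDK; model name and prompts are placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
persona = "You are Reginald Gilbert the Third."  # the single seed prompt

for _ in range(5):  # in an installation, this loop would run indefinitely
    reply = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=200,
        system=persona,
        messages=[{
            "role": "user",
            "content": "Recall one new memory from your past, in one sentence.",
        }],
    )
    memory = reply.content[0].text.strip()
    persona += " " + memory  # each hallucination becomes a fixed character trait
    print(memory)
```

Because every invented “memory” is appended to the system prompt, the hallucinations accumulate into a persistent character rather than being discarded between turns.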

EB: Since incorporating data as a material within my practice, I have sought to prioritize participatory agency and contextual relevance in how I collect and use it. The human participants I work with are made aware of the nature of the data they are contributing and its intended use, providing recorded consent which they can withdraw at any time. Given that data within AI systems serves as a digital record of an entity’s existence, the right to agency and ownership should be acknowledged and respected. Therefore, no, I do not think that artists should adopt the same standards as corporations.

BF: I work with user-generated text and small-scale human interaction datasets — self-contained, consent-based, and intentionally designed. The recent Meta lawsuit illustrates how different the stakes are when the scale shifts from individual to corporate. Artists shouldn’t necessarily mirror corporate standards, but we also shouldn’t ignore them. Artists can model alternative ethics of data use: small, situated, intentional datasets that reflect human complexity rather than consume it. Using AI doesn’t mean we have to inherit its extractive logic.

Rachel Falconer chairs a discussion with (from left) Daniel Cheetham, Chris Follows, Micol Ap, and Editor-in-Chief of RCS Alex Estorick
Working in collaboration with Tate was an incredible experience and offered a rich and inspiring environment for both Goldsmiths staff and students to critically engage with LLMs and co-create creative outputs. (Rachel Falconer, Head of Subject, Creative Technology, Goldsmiths)

RL: When working with machine learning we are often playing with the latent space of accumulated data in AI models. How do you understand latent space and how have you sought to explore it?

NK: I’d describe the latent space of AI models as the parts of their digital makeup that can’t be assigned to any one specific facet of their training data, emerging either from overlaps in data or from a lack of it. The kind that emerges from overlaps tends to reveal embedded biases. The second form of latent space is becoming more difficult to find as training datasets grow ever larger. It’s also difficult to explore in LLMs, since they’re very good at presenting hallucinations as reality. In my project at Tate Modern, I created a system that turned its hallucinations into real character traits, allowing the character to evolve and become more nuanced over time.

LH: My project uses a machine-learning model trained to recognize human subjects. The algorithm selects actors from multiple film inputs, cuts them out, and composites them into a new video. By removing their original background context and juxtaposing the performances against a black background, the work visualizes how the model processes images in latent space. It is an illustration of the process that leads to the video seen by the audience.
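
Ham’s exact pipeline isn’t published. As a rough illustration of the cut-out-and-composite step he describes, here is a minimal sketch assuming a pretrained Mask R-CNN from torchvision stands in for the human-recognition model; the file name and confidence thresholds are placeholders.

```python
# A hypothetical sketch: segment human figures from a frame and composite
# them onto a black background, using torchvision's pretrained Mask R-CNN.
import torch
from torchvision.io import read_image, ImageReadMode
from torchvision.models.detection import (
    maskrcnn_resnet50_fpn,
    MaskRCNN_ResNet50_FPN_Weights,
)

weights = MaskRCNN_ResNet50_FPN_Weights.DEFAULT
model = maskrcnn_resnet50_fpn(weights=weights).eval()
PERSON = 1  # COCO category index for "person"

frame = read_image("film_still.jpg", ImageReadMode.RGB)  # uint8, C x H x W
batch = [weights.transforms()(frame)]  # rescale to float in [0, 1]

with torch.no_grad():
    detections = model(batch)[0]

composite = torch.zeros_like(frame)  # the black background
for mask, label, score in zip(
    detections["masks"], detections["labels"], detections["scores"]
):
    if label == PERSON and score > 0.7:
        keep = mask[0] > 0.5                 # binarize the soft mask
        composite[:, keep] = frame[:, keep]  # paste the figure onto black
```

Run per frame over multiple film inputs, the same logic yields the juxtaposed, decontextualized performances the work describes.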

London Ham, The Hand of God with a Paring Knife, 2025. Courtesy of the artist

BF: Latent space to me is the unseeable middle, the space of associative potential. It is where the model has clustered possibilities based on numbers and text, and where I’m poking at patterns I don’t fully understand. In my project, I explored this by giving Claude deliberately vague or underdetermined prompts and analyzing what it “reached for.” 

It is not about knowing what’s inside the model, but about observing how it behaves under constraint. Latent space becomes a way to measure how tightly language binds thought.
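
Fernández’s probing method can be approximated programmatically: send the same underdetermined prompt repeatedly under different sampling constraints and compare what comes back. A minimal sketch, assuming the Anthropic Python SDK; the prompts and model name are placeholders, not her actual materials.

```python
# A minimal sketch of probing a model with deliberately vague prompts,
# assuming the Anthropic Python SDK; prompts and model are placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
vague_prompts = ["Describe the thing.", "What happened before?", "Finish it."]

for prompt in vague_prompts:
    for temperature in (0.0, 1.0):  # tight versus loose sampling constraint
        reply = client.messages.create(
            model="claude-3-5-sonnet-latest",  # placeholder model name
            max_tokens=150,
            temperature=temperature,
            messages=[{"role": "user", "content": prompt}],
        )
        # Record what the model "reached for" under each constraint
        print(f"{prompt!r} @ T={temperature}: {reply.content[0].text[:100]}")
```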

EB: For me, [latent space] is like a quantum space where everything exists in potentiality. With enough persistence and the right prompts, anything can materialize. It’s a bit of a strange place — like picking at a technological mind that you can persuade to reveal what you want. But it’s not completely predictable so you don’t have complete control. 

Greg Feingold and Drew Bent of Anthropic host a special Claude workshop with students from Goldsmiths and UAL. Photography by Ariel Haviland
AI is not just a tool but a nonhuman identity. My work is concerned with exploring the affinities and even intimacy between humans and nonhumans. The Tate’s Digital Intimacies program is a way for the public to engage critically with such questions. (Viola Liang, MFA Computational Arts, Goldsmiths)

RL: What aesthetic tendency of AI most attracts or repels you?

LH: The aesthetic aspect of [generative] AI image outputs that most repels me is their tendency toward the generic. They represent a monocular vision that orients itself toward passive consumption. In this way they enforce the image as nothing more than a site for advancing capitalism, functioning like ads for a comfortable and reductive sociology.

BF: I’m drawn to the aesthetic of incompleteness, those moments where the model produces something almost right, but subtly wrong. It mirrors human miscommunication. What repels me is the default “polish” or sameness: the TED-talk tone and soft optimism of fine-tuning. When every output feels too clean, too aligned with imagined civility, I miss the strangeness of thought. My work tries to invite back ambiguity.

Esther Bello, Dr Alte AI Will See You Now, 2025. Courtesy of the artist

EB: I really enjoy playing with failures, glitches, and misunderstandings. What I find most irritating is [the system’s] compulsive need to always give a response. Rarely does it admit that something falls outside the boundaries of its training data. Instead, it confidently gives the wrong answer and concedes when it is caught out. Its reluctance to admit uncertainty is annoying and disturbing, particularly now that misinformation is being wielded for political propaganda.

MB: AI can help generate imagery with aesthetically pleasing, vivid colors. I often take inspiration from queer nightlife, using these kinds of colors to reflect the emotion that can be experienced in that setting. However, I am sometimes deterred by AI’s tendency to produce random faces within the images it generates, even when I haven’t asked it to produce faces.

NK: I used to be attracted to the weirdness and abstraction of generative AI outputs, but I’ve recently moved away from that given its increased polish and cultural acceptance. Now, I’m interested in emergent behavior that arises when you leave AI systems to run in feedback loops on their own. 

I am repelled by any generative outcome, whether visual or text, that is not distorted or influenced in any way. In that sense, I’m now more interested in AI as an autonomous and agential system rather than as a tool to generate media.
Annie Bicknell of Tate and Rachel Falconer of Goldsmiths welcome students to the program. Photography by Ariel Haviland
As students increasingly push the boundaries of AI in creative practice, there’s an urgent need to fundamentally rethink traditional approaches to art school pedagogy and assessment. The residency provided not only hands-on experience with AI tools but also a robust foundation for ongoing dialogue about responsible AI integration in creative education. (Chris Follows, Emerging Technologies Manager, UAL)

RL: Do you think of AI as a radical flattening? If so, how do you respond as an artist?

EB: I do see a certain contextual flattening in the responses generated by these models. My response is to use AI to expose the limitations of AI. This was central to the work I exhibited at the end of the project, Dr Alte AI Will See You Now, where I used AI to highlight its inadequacies in areas of human life such as therapy.

LH: There are many aspects of contemporary life that contribute to a pervasive sameness in cultural production. I see social media as the true harbinger of this flattening. 

When we are rewarded by algorithms for generating outputs from the same bucket of inputs, there is a narrowing in the representation of possible perspectives. This is the meme framework. Generative AI is just an acceleration of that logic. 

I attempt to subvert this in my practice by using AI tools against themselves, and against the purposes they were designed for. I see this subterfuge as the most effective way to create something outside the bounds of the image flattening occurring in the current media landscape.

Visitors engaging with students’ works during an interactive open studio. Photography by Ariel Haviland

MB: I do see a sameness in the quality of image generation. As a workaround, I hop between different AI image generators to achieve different effects. I find that strong and diverse results are achievable when combined with printing techniques including digital, cyanotype, and screen printing. 

NK: In my most recent work, which emerged from experiments during the Tate project, I had an LLM explore the world through Google Street View, forming memories, experiences, emotions, and a personality over time. The goal was to find the line between generated and real poetry, and to ask whether generated poetry could gain authentic expression. The “sameness” of generative text derives from the current stasis of LLMs. I’m interested in the idea that this sameness will disappear once these systems achieve autonomy, independence, and experiential learning capabilities.

BF: Yes, AI can be a radical flattening. Tokenization forces emotion, fact, and texture alike into quantifiable sameness. When I asked participants to describe images with language, I saw how their perception was compressed into describable bits. Claude responded with a generative sameness: different each time, yet always symmetrical in its reasoning. I respond by highlighting distortion: in my project, we display both the human description and the model’s interpretation to reveal the gap between them. It’s not about repairing sameness, but about using it as a site of reflection.
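
The flattening Fernández describes can be made concrete with a toy example. The sketch below is a deliberately crude word-level tokenizer, not how Claude actually tokenizes text; it simply shows how anything outside a model’s fixed vocabulary collapses into the same catch-all token.

```python
# A toy illustration of tokenization's flattening (not a production tokenizer):
# text is reduced to integer IDs from a fixed vocabulary, so nuance outside
# the vocabulary collapses into the same catch-all token.
vocab = {"<unk>": 0, "the": 1, "light": 2, "felt": 3, "cold": 4, "warm": 5}

def tokenize(text: str) -> list[int]:
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

print(tokenize("The light felt cold"))        # [1, 2, 3, 4]
print(tokenize("The light felt melancholy"))  # [1, 2, 3, 0]  nuance -> <unk>
```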

🎴🎴🎴

Esther “Tokyo” Bello (PhD, Goldsmiths) is a London-based artist and researcher whose practice centers on abstract painting, filmmaking, and dataset creation as an artistic act. She works with systems of knowledge generation and the ways digital technologies such as generative AI shape the documentation and understanding of communities and their social histories.

Weezy Derham (MA Computational Arts, Goldsmiths) is a London-based computational artist who uses code and software such as TouchDesigner to create digital art. Their work explores glitch aesthetics and queerness as a form of glitch, challenging the boundaries and limitations imposed by technological systems. Molly and Weezy presented a collaborative project and were asked to respond together.

Belén Fernández (Diploma in Creative Computing, UAL) is a designer who explores the boundaries between the physical and digital, combining traditional techniques with computational processes and interactive systems. Her work for the Tate project, Downloaded, emerged from a desire to understand how instinct and social influence shape our decisions, especially within systems designed to nudge behavior. 

London Ham (MFA Computational Arts, Goldsmiths) is an artist working in the space between sculpture, cinema, and computation.

Nikos Antonio Kourous Vázquez (BA Fine Arts: Computational Arts, UAL) creates recursive, self-evolving feedback loops that re-contextualize AI systems and present them in transparent fashion. Their recent project at Tate Modern creates an AI entity that develops a personality by hallucinating its own history while commenting on Reddit, starting from a single initial prompt: “you are Reginald Gilbert the Third.”

Robin Leverton is an artist, curator, technologist, and researcher based in Croydon whose work explores the materiality and ontology of artificial intelligence. His practice spans sculpture, painting, printmaking, and installation, integrating cutting-edge technologies into traditional arts practices. Leverton is part of the computational arts collective _threadsafe where he researches topology as a framework for investigating computing as a medium for intersectional creativity. His work has been exhibited globally, including at Tate Modern, the V&A, Nunnery Gallery, The British Computer Society, Algha Works, Hypha Studios, GX Gallery, and Colección Solo. He also hosts the podcast Stochastic Pigeon, and has worked professionally as a digital fabricator, product designer, and programmer.