I invited Lawrence and Ramon because of the unique and powerful ways they engage with AI, digital culture, and social issues. Their practices are deeply people-centered yet boldly speculative. They expand how we experience and think about technology: from imagining visionary, sci-fi-inspired futures to interrogating the structural biases coded into data and proposing more inclusive ways forward. (Annie Bicknell, Curator of Public Practice at Tate)
Ramon Amaro: It’s my absolute pleasure to welcome Lawrence Lek, an artist whose work exists at the intersection of simulation, cinema, and speculative futures. Over the past decade, he has built a unique and deeply resonant body of work that blends digital simulation, speculative fiction, and immersive environments to explore themes of identity, geopolitics, and the future of technology. Using tools including game engines, 3D rendering, CGI animation, virtual architecture, and original soundtracks, he constructs virtual worlds that feel both fantastical and eerily plausible.
Much of Lawrence’s work is quite prescient. His films and installations, from Sinofuturism (1839-2046 AD) (2016) to Geomancer (2017), AIDOL (2019), and Black Cloud (2021-23), are not just visual experiences, but emotional landscapes inhabited by AI satellites, Chinese cosmologies, self-driving cars, faded pop stars, and carebots in crisis.
Drawing from his background in architecture and his own diasporic perspective, Lek creates worlds that are as philosophically rich as they are aesthetically mesmerizing. His art asks urgent questions about what it means to be creative in an age of artificial intelligence. What kind of futures are being built, and for whom? Indeed, is AI the future of art?
Whether by staging the CGI dream of a sentient satellite or designing a digital afterlife, Lawrence’s work invites us to reflect not only on the technologies we build, but on the values, memories, and myths we encode within them. His narratives often center on artificial intelligence, automation, and the posthuman environment, frequently framed through Eastern and global techno-cultural lenses. Stylistically, Lek’s aesthetic is cinematic, atmospheric, and hyperrealistic, evoking a mood of melancholy detachment that recalls both cyberpunk and ambient art. His practice also includes original soundtracks, often composed by himself, further enhancing the emotional texture of his digital landscapes. Lawrence, thanks again for joining us. Is it weird hearing your voiceover from Sinofuturism (1839-2046 AD) when you’re about to speak?
Lawrence Lek: You can tune out an ambient frequency, but you can’t really tune out a voice.
RA: We tend to focus a lot on the visual in CGI, 3D animation, and the cinematic, but you’ve shown us throughout your work that there’s something about the oral, something about the sonic. Tell me about this voice.
LL: What we just heard is from the video essay Sinofuturism, which was made in 2016 with an English text-to-speech voiceover. What you heard was actually the 2022 Mandarin update for its Shanghai showing. Unlike many of my other works, which are CGI-generated, it’s made as a video essay with clips about AI, China, and technology. I was thinking about how video game soundtracks, video essays, and documentaries are composed — there’s a big gap between ambience and attention.
In music and sound design, this idea is well explored: what we should be paying attention to and what counts as ambient music. When voices are used in video games and films, you have different levels — main dialogue and background chatter. The moment we hear speech above a certain volume level, we suddenly start paying attention, unless it’s mixed into some kind of gibberish with a lot of noise and not a lot of signal.
I’m interested in this texture between ambient information in a video game — which is like the environment — and attentional information, like a red glowing drone. You’re thinking: is this a friend or foe? Is it going to threaten me or help me?
When composing soundtracks and video game landscapes, I’m always thinking about what is ambient, what is environmental, and where you’re meant to pay attention. There’s a famous Walter Benjamin quote that architecture is always perceived “in a state of distraction.” As a former architect, I always hope that people pay attention to the background. But it’s a weird paradox — you’re composing these spaces that are meant to be iconic or memorable, but the whole point of the art form is about space.
Because I naturally think about things in terms of atmosphere and texture, I often have to force myself to try and make something iconic. How do you turn an amorphous concept, like space or time, into a recognizable form? That is a challenge I’m always playing with.
RA: Are there hints of idealism in this? You’re trained as an architect, and your current work is still thinking within that texture of architecture. I think the fastest way to get someone not to be an architect is to send them to architectural school. Those who have defected from that discipline find different fascinations. Architecture has a very functional capacity — it uses fantasy and imagination for a particular purpose. You’re supposed to walk into a space and feel something. But rarely is that connected to the actual experience one has in that environment. Are you striving for different types of priorities? What happens if the background fades to the foreground or, with AI, when the foreground is the background?
LL: Definitely. Each art form prioritizes its own kind of experience. If you have an art form prioritizing an exterior experience — let’s say painting — then the interpretive level becomes about the interior experience. What is the artist trying to say? What are the layers of meaning beyond the surface image?
With video games, people are never in agreement: is it essentially an interactive story, or is it a fancy, dressed-up game of chess? Is it rule-bound or a narrative about exploration?
What I always come back to is not so much the idealism or utopian aspect as the exterior-to-interior transition that happens continuously. In film, you have exterior scenes shot on location and interior scenes shot on a soundstage. There’s always this switching between different modes, often done through montage. What I find interesting in real life is you transition from outside a building like Tate Modern and go inside, and the two experiences of the same thing can be completely different.
When making films like AIDOL, I’m always thinking about how you deal with inside and outside — not as an intellectual exercise, but to blur that line.
Usually, the whole world is designed, first and foremost, as a virtual stage set. But I also think about it as a cinematic journey. I think: we’re going to start on top of the mountain, go down into the valley, take a cable car back up. It’s all a continuous world, whereas in conventional production, each shot might be its own scene.
RA: I want to talk about production, because this is deeply connected to Sinofuturism (1839-2046 AD). We’re talking about a decade ago — 2016 was a very different moment, and AI and generative technologies were in a different space. Back then, natural language processing was the Holy Grail. Cracking Eastern languages was seen as something that would advance artificial intelligence. It’s bizarre that we’re now in a moment where that is reality.
At the time, you did an interview where you asked yourself: “In which ways am I like an AI?” Most presume AI perfection, but you’re highlighting these flaws with systems as well as the consequences for urban planning, and the relation of inside and outside to the individual. I want to ask about this transition from that moment of AI to now. Is AI still the future of art? How does that change with this acceleration of technology?
LL: I made Sinofuturism and Geomancer in the wake of DeepMind’s AlphaGo match with Lee Sedol. At the time, I was already thinking about the end of the honeymoon period of AI and creativity. There were hints of the geopolitical battle that we see today, since it was a Korean Go master against a program written by a British tech company. I didn’t anticipate the contingent things — the historical accidents that would lead to accelerated growth; the pandemic as a global incubator for technology; and the breakthrough in large language models. Most people didn’t start playing Go after that match; it was a spectator sport where AI was really good at this intellectual pursuit.
I never thought that contemporary AI would be accessed through $20-a-month consumer subscriptions to write essays and emails every day. Everyone’s using it, but often they don’t admit that they are. It’s consumer AI, rather than a military-industrial complex creating ARPANET — it’s a weird evolution where training and consumption are distributed, and massively influential on a granular level.
When I made Geomancer, which is about an AI who wants to be an artist, I was thinking that the creative leap had already been made. Historically, what has tended to happen when a creative leap is made is that restrictions and regulations get put up around it. With Geomancer, it was more hard-lined: art made by an AI does not count as art. In the film, there’s the “AI Anti-Art Law,” which is a regulation stating that work made by an AI is not eligible for prizes. But thinking about it now, there’s no obvious boundary between these things. Part of the boundary is how much people reveal about what process they used to make the work. I thought: what if I’ve got it completely the wrong way round? What if I am like an AI? I’m trying to make impressive things according to aesthetic criteria. I want polygons to line up at their edges. So much of the creative process is intellectual decision-making based on these criteria.
Because the art forms I do — music, architecture, video game design — are so technical, the bias against technicality as a form of creativity is real. Even eight years ago, I thought I could make a rule-based system out of a lot of creative decisions that went into Geomancer.
Today, if I can automate these processes, it creates cognitive dissonance about [my] self-worth. What is the point of what I’m doing? Is there something unique in it?
RA: Are we returning to that moment of Walter Benjamin? When I first joined Goldsmiths, I had to teach the philosophy of photography. I spent hours thinking about the relationship we have with the image, between us as sentient beings and that which we create. That was a pivotal moment for me, especially thinking about Benjamin and the invention of the camera. He begins to get concerned about where the actual essence of the artist lives. Does it live within the replication of all those images, or is it lost within that replication?
We often talk about AI-generated artworks or AI-assisted artworks, focusing on the alchemy of the artist or the power of the computer. But, for you, there seems to be something different, something very Benjaminian. You’re searching for something within that push and pull between the artist and AI. What is underneath the making of these works?
LL: One of my best friends is an extremely talented photographer who also argues that photography is not a true art form. Their argument is that the photographer does not participate in the moment a photograph is made, when the shutter is open and light enters the sensor. They may have created the conditions for a door to open, but in that moment the art was created without their presence. This idea is not so different from existential doubt when using [AI] tools. It comes so easily. It’s not about creativity, and maybe it’s about investment bias in things that come too easily.
The craft that goes into the act of making creates tension — I’m intellectually interested in AI possibilities but, as a maker, it creates deep anxiety about my physical embodiment.
Benjamin had anecdotes. He wrote about walking on the street, then thought about how this exact experience might have been for someone centuries ago. Everyday experience unlocks something. Maybe that goes back to this ambient idea — experience has so much to unlock, so many strange, unforeseen things. We’re conditioned, programmed to behave according to social codes, but somehow unlearning them just a bit opened up something in how I could make things.
RA: With traditional portraiture, you have a relationship with the brush, like a composer between brush, pigment, and canvas. Whatever is released is documentation of that moment. How is that different from AI, where you as an alchemist are pushing different buttons, putting pieces together — some artificial, some real — creating a collage to tell a story? What type of relationship do you have with AI?
LL: I think a significant difference is that, with painting, many decisions accumulate on a physical surface, so the object is a document, record, and archive of its construction. There’s that word palimpsest — a document written over and scrubbed out many times, accumulating levels of information. The indexical mark of making is inscribed into the work itself. But with digital tools, especially AI, that same palimpsest — that 100-layer deep artifact — is spread out in space and time differently from physical material forms. Digital artists rarely get the opportunity to reveal the layers.
Part of the question is: do you reveal your labor, or do you mask it? Do you deliberately smoothen it, sand off rough edges, and debug the bugs?
When I use digital or generative tools, my already fragmented mind becomes more fragmented. When things need to boil down to an exhibition photo, all those layers get flattened into a single layer of zero depth. The value of that image — sometimes its literal economic value — is a race to the bottom. The interesting thing about the aura of the image is there’s the mystical, spiritual aspect, but there’s also the perceived economic value.
The compression of labor onto a physical medium is often foregrounded, whereas the labor that gets embedded into a digital program generally gets masked.
RA: You refer to your creative practice as labor.
LL: The work involves my hands and thinking. You get tired. You get RSI (Repetitive Strain Injury). The occupational health hazards of being a digital artist are specific and different from those of a sculptor using toxic resin. The embodied labor of the practice is another thing.
RA: Is it fair to describe your recent work as autobiographical? You recently received the Frieze Artist Award for Guanyin: Confessions of a Former Carebot (2024), an immersive multimedia installation blending speculative fiction, digital worldbuilding, and philosophical reflection. Behind the story is Guanyin, a cyborg therapist named after the Buddhist goddess of compassion. This AI carebot is programmed to guide malfunctioning AIs like autonomous vehicles and surveillance systems through emotional and psychological breakdowns. I want to talk about this simulated therapist caring for other bots in mundane, laborious systems, trying to reconcile what it means to operate in an environment steeped in production expectations but with little consideration for human or technological flaws. What happens when it stops working?
LL: The trajectory of many works is personal questions transformed into collective ones. That was the case with Sinofuturism (1839-2046 AD) and Geomancer. I’m also interested in the myth of the artist, and how some artworks are about biographical ideas of working as an artist. I’m not blind to the obvious problems of architecture and music — fields full of exploitative stories. But I was constantly surprised that things are rarely what they seem.
When I write speculative work like Guanyin, I’m drawing on my own experience of everyday problems: my boss doesn’t appreciate what I do; my coworkers hate me; I want to quit this job. As humans, we feel these things often. But if you imagine a sentient AI — a being who’s smarter and maybe more powerful — and restrict their agency to their occupation, it would make sense that they’d have a breakdown.
In Guanyin, it made sense that the company would build in a therapist to stop their AIs breaking down, just as human workers might have casual Fridays or free desk massages. The characters in my films need this idea of care even more because they can be on the job 24/7.
My recent “Smart City” series focused on self-driving cars and AIs who care for them; the idea of empathy made sense to me. That explains the presence of Buddhist characters, who are there not simply as ethical or spiritual dressing, but as a practical solution.
RA: We’ve talked about having empathy with AI, but rarely do we speak of empathy for AI, especially when its inheritance from us is anxiety, productivity, and optimization. In the worlds you create — which almost seems a misnomer because you’re creating worlds that already exist — what happens if we give over our centering as human beings and start spreading empathy to computation? What is the consequence if we take that leap?
LL: Hopefully something more positive. I’m reminded of a scene from Geomancer where they encounter Laika, the Soviet space dog floating in space. The tragic thing is that Laika was a street dog chosen among dozens of others to go to space. The engineers knew Laika wouldn’t make it back. She was chosen because she was the most compliant — cute, photogenic — a good icon to promote the space program.
I think about the irony that compliance, obedience, desire for recognition, and empathy can be counterproductive to one’s life. These almost cosmic paradoxes are happening invisibly on such a massive scale. As an artist, what I can do is see things from other perspectives. Maybe that’s the idealism I have — that even if the best you can do is have a different perspective on the world, on a situation, on another creature, that’s pretty good.
RA: There’s a certain performative contradiction here. You’re saying that there’s a pathway toward empathy: a narrative of beings with everyday struggles. But this empathetic response seems to circumvent the human. I’m looking upon this narrative, hearing sonic textures overlaid, in a very constructed entry point. But I almost feel lost, because the future you’re asking us to imagine is beyond dominant narratives — beyond “sentience is coming, it will look like Terminator.” You’re looking at a future of total integration within our socio-technical ecologies. But, as a human, I need to observe this atmosphere, this future. How do we gain entry to this haptic response to prescient technology?
LL: I can’t dictate the audience’s perception. But I feel that there are visual languages that change the relation of viewer to player, and viewer to spectator, which relate to the direct experience of witnessing.
Because I’m interested in many different entry points — music, installation, video games — I want to leave a world with many entrances, which people enter according to their own instinct. My hope is that, over time, people get more familiar with the work. They might see a fragment of a soundtrack, then go to an installation, then watch the whole film. This recursive process of constantly going back to the same ideas or themes is a way of making sense for me. Maybe it’s also my way of collapsing temporal fragmentation into something that will make sense as a whole.
I chose AI protagonists because I feel kinship with that alienation, with unrealized potential that is both utopian and quotidian, with daily work and jobs. They may be avatars for things that I, and presumably others, are experiencing.
This constant reflection of the world, my work, and the role within it lends itself to journey narratives, coming-of-age stories, and first-person video games. They are all forms exploring the world as it is and as it might be.
RA: Guanyin is not only a Buddhist goddess. Her name means “observer of the sounds or cries of the world,” reflected in her role of hearing and responding to the suffering of all beings. In some depictions, she has 1,000 arms and multiple eyes to see and help countless beings at once.
In Black Skin, White Masks, Frantz Fanon writes that external perception has fragmented him into 1,000 anecdotes and pieces. There’s something about this 1,000, this fragmentation. Much of our contemporary discourse pulls us away from fragmentation toward coherence — no more philosophy, no more humanities, just productivity. I could look at your work and say it’s dystopian, but in this fragmentation, it seems very productive in contingency.
In my book The Black Technical Object: On Machine Learning and the Aspiration of Black Being (2022), I speculate on the Black technical object as a being that can live through fragmentation and duress, producing different types of livability as generative and affirmative. You seem to think about a similar approach. What do these fragments mean for the future of this technology?
LL: The idea of Guanyin as a character is interesting because it’s both specific and completely universal. Like in a video game where everybody has played that role — that character exists in a million different ways, but they are also unique.
From the earliest descriptions in the Heart Sutra or Diamond Sutra, Guanyin wasn’t described in terms of physical appearance but as a spirit. This has meant countless generations have given their own form to this meta-character. That’s akin to AI — there is no precise bodily form but a field of ideas. AI is a huge field of research that nobody agrees about exactly. How do you bridge the incredibly specific form of an artwork with universality? Does a fragmented population require unification through force, propaganda, or thought? Is it a time of individualism?
For me, growing up as a globalized person, I try to tie [my work] to a specific experience — a journey through an exhibition, a song, or even this talk right now. It’s not a gesture toward some social or political agenda, but to a single experience. When I say empathy, I’m not talking about idealism. The ability to empathize with other characters is part of our condition. Because there’s so much bias toward performance or making good art that looks a certain way, this idea of experience is so hard to grasp, but it’s important.
RA: In a world where many people long either to be governed or ungovernable, thank you for giving us insight into what our own experiences might mean and what our own relationship with AI and these technologies might mean. And thank you for creating these types of worlds, these portals, to help us imagine something that can be either the same or different from what we currently experience.
“Lawrence Lek: NOX High-Rise” runs from June 28 to November 16 at the Hammer Museum, Los Angeles.
Lawrence Lek unites filmmaking, video games, and electronic soundscapes in a singular cinematic universe. He is known for advancing the concept of Sinofuturism with immersive installations that explore spiritual and existential themes through the lens of science fiction. Featuring a recurring cast of wandering characters, his works are noted for their dreamlike narratives, evocative imagery, and preoccupation with technology and memory. In 2024, he was the winner of the Frieze London Artist Award and was named as one of Time’s 100 most influential people in AI.
Recent exhibitions include “NOX” at LAS Art Foundation, Berlin (2024); “Biennale de l’Image en Mouvement” at CAC Genève (2024); “Ten Thousand Suns” at the 24th Biennale of Sydney (2024); and “Black Cloud Highway” at Sadie Coles HQ, London (2023). Lek performs live audiovisual shows of his films and video games, and his recent soundtrack releases include AIDOL OST (Hyperdub Records) and Temple OST (The Vinyl Factory). In 2021, he was the recipient of the fourth VH Award Grand Prix and the LACMA Art + Technology Lab Grant. Lek is represented by Sadie Coles HQ.
Dr Ramon Amaro is Senior Researcher for Digital Culture at Nieuwe Instituut, the national institute for architecture, design and digital culture in the Netherlands. An engineer and sociologist by training, Ramon’s writings, research, and artistic practice emerge at the intersections of Black Study, digital culture, psychosocial study, and the critique of computational reason. Before joining Nieuwe Instituut, Ramon worked as Lecturer in Art and Visual Cultures of the Global South at UCL (London), Engineering Program Manager for the American Society of Mechanical Engineers, and Quality Design Engineer for General Motors. His recently published book, The Black Technical Object: On Machine Learning and the Aspiration of Black Being (Sternberg Press, 2022), contemplates the abstruse nature of programming and mathematics, and the deep incursion of racial hierarchy, to inspire alternative approaches to contemporary algorithmic practice.
With thanks to Annie Bicknell.