Interviews
April 14, 2025

When We Became Posthuman

One of the world’s leading scholars is rethinking cognition to account for both humans and nonhumans
Credit: Damien Roach, Artefact 7 (Denisyuk) [iii], 2024. Machine learning model trained on Denisyuk holograms. Courtesy of the artist

The mass adoption of digital media has had a profound impact on planetary life: from engineering human behavior to generating nonhuman agents to extracting from natural ecosystems. One scholar who has sought to come to terms with the ways technology, culture, and nature are folded together is N. Katherine Hayles, whose renowned book How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics (1999) cemented posthumanism as an alternative to the corporate cult of homo economicus. Since the 1970s, Hayles has explored the ways in which humans “think through, with, and alongside media,” using an interdisciplinary approach to study the relations between literature, science, and technology.

In her new book Bacteria to AI: Human Futures with Our Nonhuman Symbionts (2025), Hayles proposes an “integrated cognitive framework” that embraces the cognitive capacities of all life forms. In this special conversation with Jesse Damiani, who theorized postreality on Right Click Save, Hayles explains the need for a new theory of mind in a world where humans and AI are now inseparable. The text is accompanied by works of synthetic photography by Damien Roach.

Damien Roach, Zoning (iv), 2024. Courtesy of the artist

Jesse Damiani: You’ve woven a lifetime of scholarship into your new book, Bacteria to AI. Every topic feels almost like a hyperlink to a whole book of its own. I think the sensible place to start is to ask you to describe what the book is about and what you’re arguing for.

N. Katherine Hayles: The idea that I’m pursuing here is trying to rethink human cognition in relation to other forms of cognition, both synthetic intelligence and also nonhuman cognition. And that goes back to my book, Unthought (2017), where the first step in that journey was to make the argument that conscious cognition is only a small part of human cognition. Nonconscious cognition — and I’m calling this nonconscious, not unconscious, cognition — is all those parts of cognition we don’t have to think about: how we sit in a chair, how we bring a fork to our mouth. All of that is handled for us nonconsciously. And that leaves consciousness able to focus on very specific tasks.

Nonconscious cognition has been shown to be faster than consciousness in its uptake and is able to decode information too noisy for consciousness to make sense of; so consciousness is really built on a much older evolutionary capacity of nonconscious cognition, [which] is intimately tied into our bodily senses, proprioception, our internal senses of what our gut is doing, and so forth. It processes all that information and then forwards things that it notices to consciousness, and consciousness can either uptake that or ignore it. If it ignores it, those neural signals die out after half a second or so.

It’s as though nonconscious cognition was pulling at the sleeve of consciousness, saying, “Hey you want to listen to this?” And sometimes consciousness says yes, and sometimes it says no. But the upshot is that nonconscious cognition is more in touch with what is actually happening in our body and in the world than is consciousness.

Damien Roach, Locus (ii), 2024. Courtesy of the artist
Consciousness is dominated by the narrative faculty, and the purpose of consciousness, insofar as it narrates, is to make the world make sense. Well, sometimes the world doesn’t make sense.

Highly anomalous things happen, and often consciousness simply screens out those bizarre things, but the nonconscious knows that they’re happening. So we have this bifurcated mental capacity. The main point I wanted to make in writing the book was that you do not need to be conscious to be cognitive. That really opens the way to consider the cognitive capacities of all the other kinds of life forms that are not conscious. Nonconscious life forms are much more numerous on the planet than are conscious life forms. Conscious life forms comprise something like only five or ten percent of the planet’s biomass. 

Point two was to begin to think seriously about how you would put human cognitive capacities in relation to nonhuman and artificial capacities. That’s really the task I turn to in this present book, Bacteria to AI. I suggested in the book that we think about human cognition in what I call the integrated cognitive framework, whose purpose is, first, to insist that all life forms have some cognitive capabilities, even very simple life forms like bacteria and microbes like viruses and so forth. If they’re alive, they have ways to process information from the environment, to interpret that information, and to act on that information. In fact, all life forms must be able to do that in order to survive; that’s been shown now for life forms like yeast, bacteria, even viruses. The second part was to begin to explore relationality, how we would want to define our relation to nonhuman life forms; and then, of course, the big current question of how we want to define human cognition in relation to artificial cognition.

JD: Core to that is a drive to decenter the human, to get away from anthropocentrism. Why is that an important task to you? What are the stakes?

NKH: In the introduction to the book, I point out that thinking that human cognition is superior to every other form of cognition has been a driving force in our present ecological crises. It has led to the belief that humans are superior, that they have the right to dominate every other species on the planet and to exploit planetary resources for human ends without thinking about the cost to the environment. It really is a misinterpretation of the way that the human being is completely interpenetrated by nonhuman life forms. 

It’s a fact that the number of nonhuman cells in the human body is about ten times greater than the number of human cells. And we absolutely depend on those symbiotic relationships with bacteria, for example, or even with viruses. We’re all collectivities of organisms as much as we are individuals. 
Damien Roach, Artefact 7 (Denisyuk) B [i], 2024. Machine learning model trained on Denisyuk holograms. Courtesy of the artist

We’re holobionts, as the expression goes. We contain within ourselves multitudes, as Walt Whitman said. That’s literally true, not just a metaphor. We do contain collectivities within ourselves. This has a bunch of policy implications. For example, it’s a disastrous mistake to think that human agency is the only agency working in what I call cognitive assemblages — that is, collectivities that include artificial intelligence as well as nonhuman intelligence. The first step along this journey is to recognize that agency is distributed, cognition is distributed, and that our most important relationships — with both humans and nonhumans — are symbiotic.

JD: You compare the idea of ecological relationality to the liberal humanist project, looking at how humanism is proposed as a closed system. 

NKH: We inherited the idea of liberal humanism from the Enlightenment. In many ways, liberal humanism was a liberation from, for example, serfdom. It emphasized free will, rationality, and was founded on the idea of self-ownership. John Locke was one of the philosophers who formulated the thesis that, first of all, you own your own body; you own your own self; you own your own labor. And because you own your own labor, you can use your labor to build an investment for yourself; you can improve your situation through your own labor. Well, of course, that existed simultaneously with all forms of indentured servitude and slavery. 

It was somewhat myopic to say that liberal humanism was a liberatory force. It was for those privileged elites who could benefit from it. 

But as I point out in the last chapter, Lisa Lowe has made the argument that those advances in human welfare could only occur at the expense of servitude elsewhere, that there was a sort of zero-sum game. Middle-class English people, for example, purchased their leisure by the servitude of others, on plantations and so forth, who had no leisure at all — it was always more a dream than a reality. But as we’ve gone now from the 18th century into the 20th and the 21st [centuries], the limitations of focusing on a philosophy that celebrates individualism, rationality, and free will become more and more apparent.

Damien Roach, Afterimage (i), 2024. Courtesy of the artist

In my book, How We Became Posthuman, I was already talking about the ways that technological developments had brought the notion of free will into debate. It’s not so clear that free will really operates in a cognitive assemblage that includes, for example, artificial intelligence. I was way overdue to begin to rethink the premises of liberal humanism, and I tried to do that without ignoring the benefits of liberal humanism at the same time. There’s no doubt that someone working for wage labor was in a better situation than someone in a position of being a serf to a landowner. That was an improvement. We can’t deny that that was an improvement; so how do you keep the valuable parts of liberal humanism while jettisoning the harmful parts?

The notions of rationality, free will, and individualism need to be rethought. 

Rationality, because it’s not clear that nonconscious cognition is rational in the sense that one thinks of a logical syllogism as being rational. It’s [also] not clear that one has free will when one is operating in a complex assemblage that includes all kinds of other factors. And it’s not clear that one is an individual when all humans are holobionts. We contain collectivities — so those are among the principal ideas that need to be rethought.

Damien Roach, Artefact 7 (Denisyuk) B [ii], 2024. Machine learning model trained on Denisyuk holograms. Courtesy of the artist

JD: One’s immune system is vastly different from that of somebody 5,000 years ago, which is a function of interacting with an environment and having a microbiome. Even that simple factor demonstrates that one is in literal relationality with the world. You sketch this micro/evo/techno-relationality framework. 

NKH: My idea about the micro/evo/techno is to talk about each as emergent phenomena. Karen Barad’s book, Meeting the Universe Halfway (2007), made the argument, based on Niels Bohr’s philosophy, that quantum phenomena don’t come into reality as such until they’re measured. That’s a really hard idea to grasp: that somehow reality as such is indeterminate. Of course, we’re talking about quantum phenomena here, so it’s indeterminate on a very small Planck-level scale. Nevertheless, that’s a very important philosophical point. It’s only when we interact with reality that we bring it into existence, either as a particle or [as] a wave. That was the famous binary that Barad was focusing on. 

How did we get to be multicellular beings in the first place? Well, the only possible way that could have happened is some kind of emergent phenomenon beginning in unicellularity. We know that the first life forms on the planet were unicellular, and one of the big developments in evolution was when cells started moving from prokaryotic cells that don’t have a nucleus to eukaryotic cells that do have a nucleus, and around that same time life moved from bacteria that metabolized sulfur and hydrogen sulfide to oxygen-producing cyanobacteria and, in turn, oxygen-using cells. That was a huge development because oxygen proved to be a more powerful energy source, and it therefore enabled the evolution of more complex life forms.

The third component is when humans begin building and interacting with technology [using] their cognitive skills. Here I was drawing on the work of a French philosopher, Bernard Stiegler, [who] made the argument that humans, as Homo sapiens, emerged from previous hominid species through their interaction with technology. And by technology here I mean something as simple as the domestication of fire. [That] led to enormous changes in the jaw, in the tongue, and consequently to the development of human languages. Fast forward to the present, we now have the Homo species as the dominant hominid strain, but our evolutionary trajectory is more tightly integrated with technological advances than it has ever been. 

It’s no exaggeration to say [that] from this point forward, the evolutionary trajectory of humans is going to be indissolubly bound up with the evolution of artificial intelligence. 

We now have three “first-order” emergences: quantum phenomena, evolutionary developments, and the technological developments of humans. Those were separate domains; now they’re beginning to interact with each other in what I call “second-order” emergences [such as] gene editing. In gene editing, you have bacteria interacting with computational media, and that leads to a new kind of thing where we can actually change other species as well as ourselves. These interactions are now accelerating. That presents both enormous opportunities and existential risks.

Damien Roach, Zoning (i), 2024. Courtesy of the artist

JD: In your discussion of technics, you reference Stiegler’s description of “the pursuit of life by means other than life.” I love that distillation. AI is a major part of this, perhaps posing existential risk, as do gene editing and xenobots. What are the risks to consider when life proceeds by means other than life?

NKH: Gene editing is a perfect example. Gene editing operates by using the cognitive abilities of bacteria combined with technological means. The result is to be able to create new life forms; so you have life contributing to that, but you also have technology contributing to that. With gene editing, life proceeds — that is, goes along an evolutionary pathway — by means other than life [via] inanimate, nonhuman, technological, and computational means.

Human cognition is being augmented, accelerated, and fundamentally transformed through its interaction with computational media, especially with AI. I think that factor is going to be more and more important in determining the kinds of human futures that we have.

JD: You differentiate nonconscious cognition from “unconscious” cognition. Could you explain the difference?

NKH: If we talk about unconscious cognition, we could reference Freud or Lacan. But in both Freud and Lacan, the unconscious is built out of repressed material from consciousness. The unconscious, both in the Freudian and Lacanian schemes, is a kind of back formation from consciousness. They presume that consciousness comes first. There’s some trauma in Lacan, including the trauma of being born, or in Freud some kind of sexual trauma that causes consciousness to wall off part of itself. And that part of itself is loosely referred to as the unconscious.

Nonconscious cognition is not a back formation from consciousness; it precedes consciousness. It’s not something that is walled off from consciousness; it’s in continuous communication with consciousness. Nonconscious cognition is not built on trauma. It’s built on an intimate relationship with both bodily and internal senses — so nonconscious cognition is, in my opinion, more capacious, more important, and more pervasive than the unconscious.

Damien Roach, Locus (iv), 2024. Courtesy of the artist

JD: In integrated information theory, consciousness is framed as a gradient. Do you see the integrated cognitive framework as a similar type of spectrum for cognition?

NKH: Yes, I do think of [it] as a spectrum. I think it’s a mistake to begin to think of consciousness as a binary, where you’re either conscious or you’re not. There are all kinds of gradations that go into that.

JD: You brought up Karen Barad, and we’ve discussed Stiegler and technics. When you were talking about the tribraided aspects of Bacteria to AI, Donna Haraway also factors in. What do you draw from Haraway in the micro/evo/techno?

NKH: I was particularly focusing on Haraway’s idea of sympoiesis, a combination of symbiosis — a kind of close relationship between two different species — and the last part of autopoiesis. Poiesis means making [and] poetry comes from the same root as poiesis. Haraway was really referencing the whole theory of autopoiesis from [Humberto] Maturana and [Francisco] Varela. She was shifting the emphasis from autopoiesis to sympoiesis. 

You can think of the individual as a system. And in systems theory, the individual exists within the environment. And, of course, that’s where evolutionary biology sort of started. Yes, it’s the dynamics between the individual and the environment. But if you dissolve that boundary altogether; if you say [that] it’s impossible to draw a boundary around anything and say “this is the system and over here we have the environment,” you’ve created a mode of thought which ignores all the ways in which species do act as individuals. 

Damien Roach, Zoning (ii), 2024. Courtesy of the artist

Each of us, of course, thinks that we’re an individual. We think we make decisions. Well, evolutionary biology has shown that all the ways in which we think we make individual decisions are, in fact, heavily influenced by symbionts. Let us take an extreme example. If Anna Karenina decides to throw herself in front of a train, that’s highly detrimental to all her symbionts because they’re going to die with her — so you have to say she makes that decision as an individual. There are all kinds of ways in which we do act as individuals. There are also all kinds of ways in which we are influenced by our symbionts. You cannot say there are no boundaries you can draw around an organism. It’s so counterintuitive, so counter to our everyday experience that I think it’s necessary to find some kind of midway position where we recognize [that], yes, sometimes we act as individuals [and] sometimes we act as holobionts. That is where Haraway’s work really comes in.

JD: How do you think about theories like that when you’re carrying out research for a book like this? 

NKH: I have always felt an internal resistance to becoming a disciple of anybody. Maybe that has to do with the fact that I’m a woman, and women are famous for their subservient position to men. I didn’t want to fall into that role. But also I think it is just something about my personality that resists disciplehood.

There are fierce disciples of Lacan, of course, [and] of Marx — you name the theorist and you can find a whole host of people who claim to be their disciples. I’m rather of the persuasion that theories are tools that we use to think in more sophisticated ways about our lives, about the kinds of things that we encounter personally, as a culture, and as a society. And, like tools, theories can be adapted. I don’t take the position that if you’re going to talk about capitalism, you have to conform precisely to what Marx thought capitalism was. Or if you’re going to talk about the unconscious, you have to conform exactly to what Lacan thought the unconscious was. Theories are out there and we can make use of them, and we can modify them. We can take some parts of them that seem adaptive to the purpose and reject other parts of them. I’m not a purist when it comes to theory. I’m a bricoleur. I’m kind of a tinkerer [and] an engineer, and that’s what I’ve done in this book.

Damien Roach, Afterimage (ii), 2024. Courtesy of the artist

JD: In Bacteria to AI you talk about the shift from GPT-3 to the causality that’s built into GPT-4. Of course, now the big conversation is around reasoning and deep research.

NKH: You’re absolutely right. There are parts of the book that are already obsolete in that sense. One of the big developments in AI since I wrote the book is the discovery that a lot of these large language models are using heuristics, not reasoning as such. That is a huge discovery because it means that they are taking the huge mass of data that they’ve ingested in their training and have begun to develop some quick and ready rules that work well enough. It is still common practice in literature departments to teach students how to do close reading, [which] really dates back to the 1950s and what was then called the New Criticism. Close reading assumes that every word of a text is chosen for a specific reason and therefore that texts have specific meanings in the vocabulary and rhetoric that they employ. But an LLM is not choosing a precise word. It’s choosing a probabilistic word — so the whole literary practice of close reading is now kind of drawn into question if you’re dealing with AI texts and not with human-authored texts. 

Our understanding of how literature works and even of how language works is based on embodied practices of human existence. One of the fundamentals of those embodied practices is the necessity to breathe in and out and the necessity to link one’s breathing in and out with the articulation and formation of words [….] Now you have an entity that is producing human natural language but has no need to breathe in and out and no understanding of breath, emphasis, or of how you coordinate breathing in and out with the articulation of words. Consequently, it has no idea of what the articulation of words does for our emotions. 

There is a deep connection between affect and the words that we articulate. In fact, it is something I learned when I changed fields, that to understand a poem, you have to either vocalize it or subvocalize it because poetry is a supercharged literary language. If you don’t vocalize and you don’t subvocalize, you lose this capacity of literary language to supercharge each word. How do you convey that to an entity that doesn’t breathe or articulate and has no emotions? As I say in my essay, [it’s like] communicating with aliens.

How do you negotiate the interface between human embodied practice and an entity that has a completely different form of embodiment?
Damien Roach, Artefact 7 (Denisyuk) [i], 2024. Machine learning model trained on Denisyuk holograms. Courtesy of the artist

JD: I used deep research in preparation for this interview to see what questions it would have me ask you. Here is one: the intersection of literature and technology is central to your work. Are there any narratives, fictional or otherwise, that you think best capture the stakes of AI and cognition today?

NKH: In Bacteria to AI, I discuss three novels, but really you’d have to be thinking about novels that are going to be written next year to fully answer that question because the technology is moving at such a rate. The three that I analyzed were Annalee Newitz’s Autonomous (2017), Kazuo Ishiguro’s Klara and the Sun (2021), and Machines Like Me (2019) by Ian McEwan. All of those figure conscious robots. So if GPT-4.5 is closer to a conscious entity than anything we’ve had to date, we may not be so far, as you say, from actually beginning to develop conscious synthetic intelligence.

An embodied conscious synthetic intelligence such as a conscious robot would be yet another further step along. Of course, if you had an embodied entity in a robot body, you would have the advantage of all the sensory information that so far has not been available to LLMs. What kind of fiction would come out of that? I don’t know. 

In my most recent book, I’ve been entertaining the possibility that LLM creativity may lead to a completely new kind of aesthetic, which is not human-based. 
Damien Roach, Afterimage (iv), 2024. Courtesy of the artist

To date, all of our aesthetic theories are based on human embodied practices, but there may be ways to use language that would be radically different from anything we have now and would adhere or emerge from a completely different kind of aesthetic sense. I spent a few days reading AI novels that are available in Kindle Unlimited and so forth, and these are novels that, from a human point of view, make no sense at all. They are not coherent as long narratives [nor] even coherent as paragraphs. They are radically disjunctive. 

One [novel] is called Dinner Depression (2019) [which] is kind of typical of this whole range of novels. But confronting something like Dinner Depression, you have to begin to ask whether there is an aesthetic by which you could say [that it] is a better or a worse novel than another novel of a similar kind. I think there might very well be, but we as humans have really no insight into what that aesthetic would be and how that aesthetic would operate. 

We might be beginning to collect samples of that, from which someone could deduce a completely non-Aristotelian sense of narrative. What would it look like? Well, it would not have the Aristotelian narrative arc of a beginning, middle, and an end. It would not have long-term coherence. It would not have coherence even at the level of sentences. What it would have is an ability to use language that is emotionally complex and highly disjunctive. There have been experiments, of course, in human literature on disjunctive texts. I’m thinking of something like The Bald Soprano by the playwright Eugène Ionesco, or something along those lines, where it makes no sense, you know: Player A says something [and] Player B responds with a complete non sequitur — that kind of dialogue.

Those are not what I have in mind. What I have in mind is something that is radically disjunctive, put together through probabilistic connections rather than logical or emotional ones, but that uses highly rhetorically infused language. To even admit the possibility that something like this could exist is a radical step. And I’m sure there are many literary theorists who would say that it is nonsense, that there can be no such radically nonhuman aesthetic. I’m not sure that that’s correct.

I’m not sure that there could not be a radically nonhuman aesthetic.
Damien Roach, Locus (iii), 2024. Courtesy of the artist

JD: Your book How We Became Posthuman has influenced how so many people think about posthumanism and the human project. As you’re talking, it strikes me that what we’re seeing with the creep of authoritarianism around the world is a very intense backlash against the attempt to reframe what the human is. How are you thinking about posthumanism now? 

NKH: When I wrote How We Became Posthuman, there had been previous discourses about the posthuman; I wasn’t the only one to begin to think in those terms. But I think it has been an influential book. What I was trying to do was not to say what the posthuman should be; I was trying to say what it is turning out to be. The book has, at its core, a profound ambivalence. On the one hand, to try to describe what I saw actually happening. And then a much more minor note: how we could interpret what was happening in moving toward an idealistic version of the posthuman. 

I was just referencing the other day the part in the book where I say, if my nightmare is people who regard their bodies as “fashion accessories,” my dream is that people will embrace the idea that they are embodied, embedded, enacted, and so forth, and begin to make connections between their actual body, bodily existence, their environment, and where we want to go as a species.

Damien Roach, Zoning (iii), 2024. Courtesy of the artist

JD: Is there anything we haven’t covered that you’d like to highlight?

NKH: The one thing we didn’t talk much about was this idea of reversible internalities. My idea is that, as our sense of relationality has increased, framing has come front and center. It’s really important to understand the frames by which we evaluate things. In the book, I looked at Jennifer Gabrys’s Smart Forests Atlas project in two different ways. 

People are putting digital sensors and digital computational devices in forests to measure forest processes. That is putting the forest within the digital. But you can also put the digital within the forest and say that the forest is a complex analog computational device. 

You get very different ethical imperatives if you put the forest in the digital as compared to putting the digital in the forest. The frame you choose is always multidimensional. There is always more than one frame to think about any problem. And therefore, your choice of frames has ethical and political dimensions. The choice of a frame is never neutral. 


This interview has been edited for clarity and length. An extended version of this conversation was first released as an Urgent Futures podcast.

N. Katherine Hayles is the Distinguished Research Professor at the University of California, Los Angeles, and the James B. Duke Professor Emerita from Duke University. Her research focuses on the relations of literature, science, and technology in the 20th and 21st centuries. Her twelve print books include Postprint: Books and Becoming Computational (2021), Unthought: The Power of the Cognitive Nonconscious (2017), and How We Think: Digital Media and Contemporary Technogenesis (2012), in addition to over 100 peer-reviewed articles. Her books have won several prizes, including the René Wellek Prize for the best book in literary theory for How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics, and the Susanne Langer Award for Writing Machines. She has been recognized by many fellowships and awards, including two NEH Fellowships, a Guggenheim, a Rockefeller Residential Fellowship at Bellagio, and two University of California Presidential Research Fellowships. She is a member of the American Academy of Arts and Sciences. Her latest book is Bacteria to AI: Human Futures with Our Nonhuman Symbionts from the University of Chicago Press.

Jesse Damiani is a writer, curator, and foresight strategist. He is Senior Curator and Director of Simulation Literacies at Nxt Museum, Adjunct Assistant Professor in USC’s Media Arts + Practice program, and an Affiliate of the metaLAB at Harvard and Institute for the Future. He is also Arts & Culture Advisor for Protocol Labs, the Host of Adobe’s Taking Shape, a hub for 3D art and design, and the author of I Create Like the Word: Poetry in the Age of Machine Intelligence (2026). For many years he edited the Best American Experimental Writing anthology (Wesleyan University Press). Recent curated exhibitions include “SMALL V01CE” at Honor Fraser Gallery, “Lilypads: Mediating Exponential Systems” at Nxt Museum, and “PROOF OF ART” at Francisco Carolinum Linz. Damiani is also the founder of Postreality Labs, a foresight consultancy in Los Angeles, where he advises organizations on navigating the polycrisis through resilience, adaptation, and futures literacies, and through which he produces the Reality Studies newsletter and Urgent Futures podcast.

Damien Roach is a London-based artist, designer, musician, and lecturer. His projects span art, design, and creative direction, publishing, sound/music and audiovisual. Currently undertaking a PhD at the Royal College of Art, his research project “Acid Realism” is focused on machine vision and nonhuman planetary perspectives. In 2023, under the name patten, he released the first album made entirely from text-to-audio AI samples. Recent projects include immersive AV performances at London’s ICA and Tate Modern, design for clients ranging from Caribou to Disney, and publishing a journal exploring non-dystopic future visions. He has exhibited internationally, including at the 51st Venice Biennale, “Learn to Read” at Tate Modern, “Art Now” at Tate Britain, “Housewarming” at Swiss Institute, New York, and solo presentations at institutions including The Roberts Institute of Art and Gasworks, London; Kunst Halle Sankt Gallen, St. Gallen; Arnolfini, Bristol; and Neuer Aachener Kunstverein, Aachen.