Histories
March 30, 2026

The Trouble with Terminology

Artists are best placed to define the language of digital art because they are closest to what programs do, argues Paul Cohen
Credit: Harold Cohen, Untitled, 1982. Coloured dye over ink on paper. Courtesy of Gazelli Art House & Harold Cohen Trust
“The Trouble with Terminology” is part of a special series of three essays commissioned by Right Click Save from the distinguished computer scientist and son of Harold Cohen, Paul Cohen, dedicated to the language of digital art. Read his other essays “On Creativity in Digital Art” and “Harold Cohen’s Freehand Line Algorithm”.

This article is about troublesome words and how we use them. They are but a sliver of the lexicon of art, so perhaps we should ignore them and hope they go away. But no, we keep inviting them back, like a family of skunks. 

Examples of troublesome words include aesthetic, autonomy, collaboration, creativity, emergence, generative, intention, meaning, originality, personality, style, and so on. Oh boy, here comes trouble.

Words take on meanings through their use in communities of practice, and all communities borrow words and change their meanings. But some meanings are very slippery or entirely illusory. One attribute of the skunks is that everyone knows what they mean, when in fact no one does. Consequently, two things can happen when a skunk migrates from art into digital art: its meaning can become even more slippery and illusory, or its new digital context can increase its precision.

Harold Cohen, Untitled (i23-3350), 1970. Unique calcomp plotter drawing on paper. Courtesy of Gazelli Art House & Harold Cohen Trust

This article is about how to ensure that the migration of troublesome words into digital art expands our knowledge and understanding of whatever these words denote. Digital artists embrace computation but often don’t explain art-making in computational terms. “Not my job” is an understandable response, but whose job is it? We can’t count on critics, art historians, and philosophers because they aren’t artists and generally don’t know much about computation. 

Artists who work with computers are the most qualified people to propose computational accounts of aesthetics, creativity, intentionality, and other phenomena denoted by troublesome words. Here’s how: when you use a troublesome word, define it in terms of the objectively observable behaviors of your system.

Harold Cohen applied this strategy effectively to craft his ideas about creativity, autonomy, emergence, personality, and other troublesome words. 

Harold Cohen with SGI System, Boston Computer Museum, 1995. © Hank Morgan. Courtesy of Hank Morgan, Harold Cohen Trust, and Gazelli Art House

In my article on Harold’s Freehand Line Algorithm, I discuss his use of mechanistic language to delineate a “feedback-driven simulation” of human drawing to create an “illusion of intentionality”, as well as how he ended up dropping the words simulation and illusion entirely as his program AARON’s behavior became increasingly non-human. In a further article, on creativity, I discuss the behaviors that Harold considered to be creative. Here, though, I will highlight a rare failure of his writing about his relationship with AARON:

“Creativity — this particular example of creativity — lay in neither the programmer alone nor in the program alone, but in the dialog between program and programmer; a dialog resting upon the special and peculiarly intimate relationship that had grown up between us over the years.”¹ (Harold Cohen)

There’s a tiresome familiarity about this excerpt. When you don’t say what troublesome words mean, you open the door for others to interpret them in any confused, nonsensical way they like. What kind of dialog did Harold and AARON engage in? What was special and intimate about their relationship?

For some reason, human-machine interactions are described in the language of enchantment and mystery. 
Harold Cohen, Untitled (i23-3566), 1974. Unique plotter drawing in ink on paper. Courtesy of Gazelli Art House & Harold Cohen Trust

In fairness to Harold, fewer than ten paragraphs out of 1,000 mention “dialog” or “collaboration”. But none explains these words in terms of Harold’s and AARON’s abilities, behaviors, and responsibilities. In contrast, consider how Harold describes the development of AARON as a colorist:

“[I]f AARON didn’t have the hardware upon which my own expertise rested, then the standard expert system approach of emulating my own expertise was a non-starter. I needed to build a system based on the resources AARON did have, which included [...] an entirely un-human ability to build and maintain an internal model of [an] arbitrarily complex color schema. I needed to devise a set of rules flexible enough and robust enough to apply across the full range of unpredictable compositions that the program was capable of generating.”²

And then,

“The program had, in a single step, become an expert colorist in its own right. [...] I couldn’t see why it worked as well as it did, and in the following years I found I was unable even to describe it without going back and reviewing the code I’d written.”³

As a result of objectively described constraints and developments, AARON — no longer an emulation, now an expert — became quite autonomous of Harold with respect to color. That’s how a “relationship” between an artist and an art-making program should be presented.
AARON’s autonomy is due partly to its “un-human” ability to plan an entire color scheme and hold it in memory, without visual feedback. Harold Cohen, Machine Painting Series TCM #7, 1995. Dyes applied by Cohen’s Painting Machine to paper. Courtesy of Gazelli Art House & Harold Cohen Trust

Troublesome words have entailments that we infer unconsciously when we hear them. Entailments of “collaborate” are that your program shares your goals; it wants to help; you spend a lot of time together; it knows what you are doing; your relationship is roughly symmetrical; it communicates with you; and so on. Generally, these inferences overstate your program’s abilities. Sometimes they aren’t what you intended to say. The best way to defeat unintended entailments is to bind the meanings of words to objectively observed behaviors of your program.

Perhaps “collaboration” and “dialog” are apt descriptions of the 48-year span of Harold’s work with AARON. Their relationship was almost epistolary: Harold would write some code, AARON would reply with some pictures. 

Perhaps this human-machine relay is what all artists mean by “collaboration”. More likely, artists have quite different interpretations of the word. We won’t know unless artists say what they and their programs do.
Harold Cohen with painting-machine painting, Boston Computer Museum, 1995. © Hank Morgan. Courtesy of Hank Morgan, Harold Cohen Trust, and Gazelli Art House

The most troublesome words of all denote mental states such as wanting, intending, or worrying, or traits such as boldness and subtlety. Humans have always assigned mental states to things that don’t have them — to trees, rivers, and spirits, as well as machines. 

The philosopher Daniel Dennett proposed that this intentional stance is a reliable way to explain behaviors that are too complicated or obscure to explain in any other way.⁴ But if the aim is computational accounts of mental states in digital art, then the intentional stance is too permissive. It is too easy to say that a program wants to help; that its boldness is evident in its drawings; that only a powerful creative tension could produce such color combinations. Without objective groundings in what programs do, these assertions don’t mean much.

The issue here is not whether machines have mental states. That debate grows more complicated by the day. The issue is whether we can find a stance, a way to use mentalistic language, that advances our understanding of digital art.
Computational color skill need not be identical with human color skill, so it is important to say what computational colorists do. Harold Cohen Painting Machine — close-up of brush painting, Boston Computer Museum, 1995. © Hank Morgan. Courtesy of Hank Morgan, Harold Cohen Trust, and Gazelli Art House

In my view, artists should define troublesome words in terms of their program’s behaviors. I call this a constructive stance because the meaning of boldness (or collaboration or creative tension) is fixed by a construction, namely the program and its behaviors. Whereas the intentional stance is noncommittal about whether programs actually have mental states, the constructive stance asserts openly that when a program runs, boldness is generated.⁵  

Note that the constructive stance does not say that computational boldness is identical to human boldness, any more than AARON’s color expertise was identical to Cohen’s.

The constructive stance focuses on the behaviors of programs. For “boldness” we might say a program “generates broad strokes in bright colors across the canvas” or, going a little deeper into the methods that generate boldness, we might say the program “selects from the upper end of a distribution of color saturation values”. 
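To make the constructive stance concrete, here is a minimal sketch of “boldness” as a program behavior, written in Python. It is my illustration, not Harold’s code, and every specific — the saturation threshold, the brush widths, the HSV representation — is an assumption chosen for the example:

```python
import random

def pick_bold_color(rng):
    """Sketch of 'boldness' as a defined behavior: draw saturation
    from the upper end of its [0, 1] range, as the text describes.
    All thresholds are illustrative assumptions."""
    hue = rng.uniform(0.0, 360.0)        # any hue is allowed
    saturation = rng.uniform(0.75, 1.0)  # upper end of the distribution
    value = rng.uniform(0.8, 1.0)        # bright rather than dark
    return (hue, saturation, value)

def bold_strokes(n, seed=0):
    """Generate n 'broad strokes in bright colors': each stroke pairs
    a wide brush width (arbitrary units) with a high-saturation color."""
    rng = random.Random(seed)
    return [
        {"width": rng.uniform(30.0, 60.0),  # broad, never fine
         "color": pick_bold_color(rng)}
        for _ in range(n)
    ]

strokes = bold_strokes(5)
```

The point is not that these numbers define boldness, but that a claim of boldness is now bound to an observable construction: anyone can inspect the distribution the program samples from.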

All the constructive stance requires is objective descriptions of what programs do (for example how Harold’s Freehand Line Algorithm generates “intentional” lines). By all means claim boldness for your program, but say what the program does. Later on, when someone else writes a different program to generate boldness, the meaning of the term will become constrained by another construction. Eventually, someone will discern the commonalities among programs that generate boldness and will express this abstraction in more or less formal terms. When this happens the meaning(s) of boldness will derive from the abstraction.

This is precisely what happened to “learning” when it fell into the hands of AI researchers. We constructed hundreds of algorithms for binary classification, pattern recognition, predicting future states, and many other classes of tasks. We started to recognize common behaviors — some of them emergent — and gave them names such as overfitting, the curse of dimensionality, and the bias-variance tradeoff.

Harold Cohen, Untitled (i23-3547), 1971. Silkscreen on paper. Courtesy of Gazelli Art House & Harold Cohen Trust

We invented mathematical formulations that helped us to distinguish different kinds of learning. Before the ascendance of AI in the 1950s and ’60s, the literature on human and animal learning had its own troublesome words, including generalization, transfer, similarity, chunk, and concept. When these words moved into a digital context, they took on precise meanings in the literature of machine learning. We can do the same for troublesome words in art.

🎴🎴🎴

With thanks to Alex Estorick, who conceived, commissioned, and edited this series.

Paul Cohen is a professor of Computer Science at the University of Pittsburgh and the CEO of Causerie.AI, which extracts knowledge from text at scale. Prior to becoming the Founding Dean of the School of Computing and Information at Pitt in 2017, he was a program manager in DARPA’s Information Innovation Office, where he designed and managed the Big Mechanism, Communicating with Computers, and World Modelers programs. He worked at DARPA under an IPA agreement with the University of Arizona, where he founded the School of Information: Sciences, Technology and Arts, now the School of Information. His research is in aspects of artificial intelligence and cognitive science, with interest in how language, communication, and AI methods can foster understanding of highly complicated systems such as cell signaling pathways, biophysical, and socio-economic systems. He is the son of the artist Harold Cohen.

___

¹ H Cohen, “Driving the Creative Machine”, Paper presented at Orcas Center, Crossroads Lecture Series, September, 2010, 9.

² H Cohen, “AARON, Colorist: from Expert System to Expert”, Paper presented at University of California, San Diego, October, 2006, para. 47.

³ H Cohen, “Driving the Creative Machine”, 8.

⁴ DC Dennett, “Intentional Systems”, The Journal of Philosophy, Vol. 68, no. 4, February 25, 1971.

⁵ “[T]he definition of intentional systems I have given does not say that [they] really have beliefs and desires, but that one can explain and predict their behavior by ascribing beliefs and desires to them…” Ibid., 195.