Histories
March 30, 2026

On Creativity in Digital Art

50 years on from Harold Cohen’s treatise on creativity, no one has yet built a program that is creative in the way he imagined
Credit: Harold Cohen, Machine Painting Series TCM #21 (detail), 1995. Dyes applied by Cohen’s Painting Machine to paper. Courtesy of Gazelli Art House & Harold Cohen Trust
“On Creativity in Digital Art” is part of a special series of three essays commissioned by Right Click Save from the distinguished computer scientist and son of Harold Cohen, Paul Cohen, dedicated to the language of digital art. Read his other essays on “The Trouble with Terminology” and “Harold Cohen’s Freehand Line Algorithm”.

How should we discuss creativity in digital art? In my article on terminology, I propose a constructive stance: go ahead and attribute creativity to your program, but tell us what your program does that warrants the attribution. Define “creativity” as an observed behavior produced by your code. Alternatively, assess the behavior of your code and say which aspects of creativity it fails to realize, or only partly implements.

You can see the constructive stance at work in Harold Cohen’s writing about creativity, which also considered autonomy, originality, personality, and other troublesome words. 

Harold exhibited AARON widely in the 1970s, and he was probably the first artist to be peppered with questions such as: Who is driving the plotter? Who told AARON what to draw? How did it decide to put that big thing over there? Is it art?
Harold Cohen Painting Machine, Boston Computer Museum, 1995. © Hank Morgan. Courtesy of Hank Morgan, Harold Cohen Trust, and Gazelli Art House

In a 1976 essay titled “The Material of Symbols”, Harold developed an answer to questions about creativity:

“Most searching questions about the nature of the machine turn out to be questions about the nature of people, and this one is no exception. Before we could venture a more complete answer we would need to consider what we really mean by creative behavior, for if that is to be judged exclusively in terms of the manifest results of its exercise — we know so-and-so is creative because he makes a great many original images — then clearly the machine is extremely creative. Its drawings are probably as good, as original, as any I ever made myself, and I am hopelessly outclassed by it in terms of productivity.”

But Harold recognized that originality — by which he meant making unique images — was not the same as creativity:

“The program does not develop new game-states: it plays the legal moves in the current game. It says ‘Let me tell you about my world’, but rich though that world may be, the telling does not result in any further enrichment. We thus have no reason to say that the machine has any interest in the one feature I have chosen to regard as fundamental to human art-making — the continuous development of the internal representation of the world.”

Fifty years on, these arguments might sound familiar. A digital art-making system produces an endless stream of original works, each of which has a strong family resemblance to other works in the stream. The system cannot do otherwise. To see a new kind of image or “family,” the system must change.
Harold Cohen, Untitled (i23-3923), 1977-1982. Colored dyes over lithograph on paper. Courtesy of Gazelli Art House & Harold Cohen Trust

The question is, how can a system change? Around 2010, when Harold was asking this question about AARON, he knew that machines could learn and even modify their own code. However, machine learning at the time required data that identified features of training examples. Harold could not see any way to identify features that would account for his judgments about AARON’s work:

“As I’ve indicated, AARON’s performance involves a relatively large number of controlling variables. Their values can be recorded and considered, but they don’t constitute a useful description of the images to which they give rise. I have access to the images; AARON has access only to the numbers that produced them. Does that mean that I need to devise a descriptive scheme for its images that corresponds to what I see, that AARON can understand? And do I then need to correlate different combinations of variable values with my own assessment of the images to which they give rise? Is that actually possible?”¹

If this feature engineering problem were not daunting enough, AARON also presented a credit assignment problem. When a program takes many decisions or actions on its own, it can be difficult to figure out which of these is responsible for more or less successful outcomes. Which, in a sequence of compositional or color choices, makes an AARON drawing more or less engaging?

Engagement itself is a problem. Machine-learning problems usually have objective measures of success, but art does not. For Harold, the principal problem was that “AARON’s work is intended for human use and its criteria must consequently reflect what the human viewer responds to in an image.”²
Harold Cohen, Untitled (i23-3451), 1969. DITRAN output and coloured felt tip on paper. Courtesy of Gazelli Art House & Harold Cohen Trust

But suppose that Harold could engineer features that represent images and AARON could figure out which of its many decisions and actions were responsible for generating “bad” images — and there were criteria for what makes art engaging — what should a program do to improve its performance? 

Harold was well aware that programs can have unexpected behaviors that emerge from the interactions between different chunks of code. Writing about the personality so often attributed to AARON, he observed: “I know of nothing in the program to account for it. To put the problem another way, I would not know how to go about changing the program to project a different ‘personality’.”³

If Harold couldn’t see how to alter AARON’s emergent properties, it’s not surprising that he gave slim odds to AARON changing itself.
Harold Cohen Painting Machine — close-up of brush painting, Boston Computer Museum, 1995. © Hank Morgan. Courtesy of Hank Morgan, Harold Cohen Trust, and Gazelli Art House

In the years since Harold evaluated and rejected machine learning, the field has made progress. Manual feature engineering has been replaced by automated deep learning. The credit assignment problem can be addressed by reinforcement learning, and the two problems can also be tackled jointly by deep reinforcement learning. Today, programs can write and test other programs so quickly that the causes of emergent properties can probably be tracked down without too much effort. Given sufficient data and computing resources, art-making programs probably can “learn to do better”. But this alone would not make them creative:

“[...F]or anyone involved in writing a creative program the distinction between a rule and what implicitly informs the rule, between a predicate and a criterion, is critical. [....]

[T]he limit on a program’s creativity is not determined ultimately by its ability to modify its own code, but by its ability to modify its own criteria.”⁴ (Harold Cohen)

Nothing here depends on the form of the program, on whether it is a production system or a neural network or something else. The thrust of Harold’s critique is that code is merely the last step in a chain of deliberations about what to do, how to do it, and why it is important. These deliberations happen in the minds of artists, not in the minds of their programs. 

It would be an impressive technical feat for art-making programs to modify themselves, but if the programs don’t have their own criteria or if they cannot change their criteria, then they are not creative.
Harold Cohen Painting Machine — Harold taking notes, Boston Computer Museum, 1995. © Hank Morgan. Courtesy of Hank Morgan, Harold Cohen Trust, and Gazelli Art House

Harold’s position, no doubt influenced by his long and diverse history as a human painter, was even stronger:

“The central question for me, then, is not whether a program can self-modify in order to satisfy internal criteria; it is whether enough of that chain of criteria can ever be internal to a program for it to manifest the self-directed development we expect of human artists.”⁵

In short, Harold viewed creative behavior in machines as self-modification of internal criteria related to internal representations of the world. He wasn’t worried about external representations because AARON clearly could make engaging images. 

New internal representations — such as the “skeletons” that enabled AARON’s figurative phase — were evidence of creativity at work, and Harold’s real target was the driver of these representations, the voice that says: “this is how I want to represent the world.” He envisioned the criteria for these representations changing slowly, supported by a long history of changes, as they do in human artists and art history.

Internal skeletons for figures were initially developed by Harold around 1980 and became increasingly complex as AARON’s drawings became more figurative. It is hard to imagine how AARON might have made this creative move by itself. Courtesy of the Harold Cohen Trust

To the best of my knowledge, nobody has built a program that is creative in this sense. Humans continue to guide the development of art-making programs by programming or prompting or voting for images.

Eventually, Harold came to think of himself and AARON as a kind of creative couple: “I believe that my dialog with AARON is an example of machine creativity, albeit a small one.”⁶ And while this seems a significant retreat from his original vision of machine creativity, Harold’s own creativity was undoubtedly amplified by his work on AARON.

Was Harold satisfied with this arrangement? He wasn’t shy about what he wanted, yet I can find no explicit statement in his writing that he wanted AARON to be creative. 

While he was deeply perceptive about the difficulties involved, he neither tackled these problems nor seemed to want to. Perhaps he was cautious about the prospect of a self-directed AARON, just as he had been rattled by a successful new drawing algorithm:

“The very success of the program in fact led to the biggest personal crisis for me in many years. I’d spent all those years trying to increase the autonomy of the program; it could already do all its coloring without my intervention, now it could do all its drawing too. [...] I felt that my dialog with the program, the very root of our creativity, had been abruptly terminated.”⁷

Harold Cohen Painting Machine — close-up of brush painting, Boston Computer Museum, 1995. © Hank Morgan. Courtesy of Hank Morgan, Harold Cohen Trust, and Gazelli Art House

For whatever reason, AARON never achieved creativity of the kind Harold discussed; so where does this leave the project of building creative programs? Despite the difficulties Harold exposed, he was right to focus on changing criteria rather than code, and I wouldn’t bet against his vision of self-modification of internal criteria related to internal representations of the world. Programs already modify programs. The harder part for machines, as for all artists, will be the continuous self-directed development of internal representations of the world.

🎴🎴🎴

With thanks to Alex Estorick, who conceived, commissioned, and edited this series.

Paul Cohen is a professor of Computer Science at the University of Pittsburgh and the CEO of Causerie.AI, which extracts knowledge from text at scale. Prior to becoming the Founding Dean of the School of Computing and Information at Pitt in 2017, he was a program manager in DARPA’s Information Innovation Office, where he designed and managed the Big Mechanism, Communicating with Computers, and World Modelers programs. He worked at DARPA under an IPA agreement with the University of Arizona, where he founded the School of Information: Sciences, Technology and Arts, now the School of Information. His research spans artificial intelligence and cognitive science, with an interest in how language, communication, and AI methods can foster understanding of highly complicated systems such as cell signaling pathways, biophysical systems, and socio-economic systems. He is the son of the artist Harold Cohen.

___

¹ H Cohen, “AARON, Colorist: from Expert System to Expert”, Paper presented at University of California, San Diego, October, 2006, para. 47.

² H Cohen, “Decoupling Art and Affluence”, Paper presented at Lisp Users Annual Conference, Seattle, 2001.

³ H Cohen, “What is an Image?”, Paper presented at University of California, San Diego, 1979, 20.

⁴ H Cohen, “A Self-defining Game for One Player”, Paper presented at Loughborough Conference on Cognition and Creativity, October 1999.

⁵ H Cohen, “Decoupling Art and Affluence”.

⁶ H Cohen, “Driving the Creative Machine”, Paper presented at Orcas Center, Crossroads Lecture Series, September, 2010, 16.

⁷ Ibid., 12.