Roundtable

Art & AI

Prompted by Allison Parrish
Event February 9, 2022, Online
Topic Tags Technology

The simple juxtaposition of "art" and "AI" in a headline reliably provokes hand-wringing frissons of futurity. Yet artificial intelligence is now pervasive not just in everyday digital technology (search engines, social media feeds, mobile phone cameras, word processors) but also in the software tools of many artists (image editing, 3D animation, music production, etc.). The evergreen claim that art and AI are fundamentally incompatible both obscures the long history of intersections between the arts and computation over the past century and inhibits potentially valuable criticism of how AI is deployed in the arts today. So it’s worth considering: why does the idea of artists making use of AI seem perpetually new, and why does it so completely capture the imagination of pop culture and academia alike whenever it arises?

A discussion about art and AI must be, at some level, a discussion about the relationship between artists and the techniques and tools they use to create art. The paintbrush at least partially determines the appearance of the painting, but most painters would not consider the paintbrush a "collaborator" in the work of art. On the other hand, some artists strive to cede all control of the art-making process to rule-based or stochastic processes, as a way of absolving themselves of any claim to authorship over the artwork. Do tools that make use of artificial intelligence tend to fall on one end of this spectrum or the other? Is AI uniquely capable of being something other than a mere tool in the art-making process? Can an AI process be a collaborator, or even an agent, to which full authorship can be attributed? Is the current crop of emerging artificial intelligence technologies—e.g., generative adversarial networks (GANs) and large pre-trained language models like GPT-3—qualitatively different in this regard from what came before?

Technologies incorporating artificial intelligence—and in particular, machine learning—have been rightly criticized for perpetuating inequity. Machine learning models are trained on data, and every dataset carries with it the biases and worldview of those who collected it. When machine learning models are deployed in the real world, their predictions perpetuate these frozen worldviews in a feedback loop, the effects of which we can see in everything from shopping recommendations to algorithmic sentencing. Is artificial intelligence, then, inherently conservative? Can it only ever maintain the status quo, speak words that have already been spoken? Much of the research on artificial intelligence—and indeed, on artificial intelligence and the arts—is funded by large corporations like Google and Facebook, which benefit from the public’s perception of AI as harmless and objective. Does art made with cutting-edge artificial intelligence techniques primarily serve as a way of "rehabilitating" these potentially harmful technologies in the public eye?
