Interviews

Trevor Paglen

Trevor Paglen, Sight Machine, 2017. Performance view, Pier 70, San Francisco, January 14, 2017. Kronos Quartet. Photo: Joshua Brott, Obscura Digital.

Trevor Paglen is the first artist-in-residence at the Cantor Arts Center at Stanford University. The exhibition “The Eye and the Sky: Trevor Paglen in the Cantor Collection” places his photographic series of Predator drones, “Time Study (Predator; Indian Springs, NV),” 2010, alongside photographs by artists such as Eadweard Muybridge, Edward Steichen, and Eve Sonneman from the Cantor’s permanent collection. Earlier this year, the Cantor also commissioned Paglen’s multimedia performance Sight Machine. Below, he discusses how issues of surveillance run through both the exhibition, which is on view through July 31, 2017, and the performance. On July 25, 2017, Paglen will participate in a panel discussion on civil liberties in the age of hacking at the Solomon R. Guggenheim Museum in New York. His exhibition “A Study of Invisible Images” opens at Metro Pictures in New York on September 8, 2017.

MY TIME AT STANFORD has centered on a development in imagemaking that I think is more significant than the invention of photography. Over the last ten years or so, powerful algorithms and artificial intelligence networks have enabled computers to “see” autonomously. What does it mean that “seeing” no longer requires a human “seer” in the loop?

This past January, the Cantor commissioned Sight Machine, which I produced in collaboration with the Kronos Quartet. While the musicians performed selections by Bach, Raymond Scott, Laurie Anderson, and Terry Riley, among other composers, they were surrounded by cameras that fed live video into a rack of computers. The computers were programmed to run a wide range of computer-vision algorithms, such as those used in self-driving cars, guided missiles, and face-detection and face-recognition software, as well as the artificial intelligence networks used by Facebook, Google, and other companies to interpret images. As the Kronos Quartet played, a projection behind the musicians showed how they appeared to the array of algorithms watching them.
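
To give a sense of the plumbing involved, here is a minimal sketch of a Sight Machine–style loop in Python: a camera feed runs through a stock face-detection algorithm, and the machine’s “view” is drawn back onto the frame for projection. It assumes the opencv-python package and its bundled Haar-cascade face detector; it illustrates the general technique, not the actual software used in the performance.

```python
# Minimal sketch of a Sight Machine-style pipeline: a camera feed is run
# through a stock face detector, and the machine's "view" -- bounding boxes
# over detected faces -- is rendered back for projection.
# Assumes the opencv-python package; illustrative only.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

capture = cv2.VideoCapture(0)  # default camera
while True:
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Draw what the algorithm "sees": a box and a label per detection.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, "face", (x, y - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("machine view", frame)  # what an audience would see projected
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
capture.release()
cv2.destroyAllWindows()
```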

At one time, to surveil implied “to watch over,” and to survey was basically “to look.” Between these two definitions we get a sense of how photographs can be put to multiple aims. Eadweard Muybridge’s Sunset over Mount Tamalpais, 1872, which gives you a vantage point from which to take in the Northern California landscape, is also a document of the move toward geopolitical dominance. That work is in “The Eye and the Sky,” and Muybridge has been on my mind for some time. My photographic series in the show, “Time Study (Predator; Indian Springs, NV),” is made up of albumen prints of Predator drones. The photographs relate to Muybridge because they deal with conventions that we take for granted in landscape photography.

During the residency, I worked with computer-vision and artificial intelligence students and researchers to further explore the largely invisible world of machine-to-machine seeing. We not only developed software that allowed us to see what various computer-vision algorithms see when they look at a landscape, but also implemented software that, in conjunction with artificial intelligence, could “evolve” recognizable images from random noise—almost like a hallucination, or the phenomenon of pareidolia, in which one sees faces in shapes such as clouds.
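
One common way to “evolve” an image from noise is the technique often called activation maximization: gradient ascent on one class score of a pretrained classifier, starting from random pixels. Below is a minimal sketch of that general idea, assuming a recent PyTorch/torchvision; the model choice, class index, and hyperparameters are illustrative and not drawn from the Stanford project.

```python
# Sketch of "evolving" a recognizable image from random noise via
# activation maximization: ascend one class score of a frozen,
# pretrained classifier. Illustrative only; skips niceties such as
# input normalization and regularization for brevity.
import torch
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)  # optimize the image, not the network

target_class = 207  # an arbitrary ImageNet category, chosen for illustration
image = torch.randn(1, 3, 224, 224, requires_grad=True)  # start from noise
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    score = model(image)[0, target_class]
    (-score).backward()  # ascend the class score by descending its negative
    optimizer.step()
    with torch.no_grad():
        image.clamp_(-2.5, 2.5)  # keep pixel values in a bounded range
```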

To “teach” AI software how to see various objects, you have to use enormous libraries of example images, called training sets. For example, if you want to build an AI program that can recognize pencils, keyboards, and cups, you need to give it thousands of pictures of each object; during the training phase of development, the software teaches itself to see the differences between them.
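
In practice, a training set is often nothing more than a directory of images organized by label. A minimal sketch of that training phase, assuming PyTorch/torchvision and a hypothetical training_set/ folder with one subfolder per object:

```python
# Sketch of the "training set" idea: images organized by label
# (training_set/pencils/, training_set/keyboards/, training_set/cups/)
# become the data a classifier learns from. Paths and model choice are
# hypothetical; real projects often fine-tune a pretrained network.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_data = datasets.ImageFolder("training_set", transform=transform)
loader = DataLoader(train_data, batch_size=32, shuffle=True)

model = models.resnet18(num_classes=len(train_data.classes))  # random init
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):  # the "training phase": learn to separate the labels
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```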

The implicit biases and values built into various training sets can have enormous consequences, and there are numerous examples of training sets creating AIs that reflect the unacknowledged forms of racism, patriarchy, and class division that characterize so much of society. A Google AI program described an African American couple as “a pair of gorillas,” while other AI technologies routinely assume that doctors are male and nurses are female. Indeed, in AI-based gender-recognition algorithms, subjects are invariably described as either “male” or “female”—the concept of nonbinary gender identities is utterly alien.
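
That binary is not a setting a user can toggle off; it is baked into the model’s architecture. A hypothetical two-line illustration, in PyTorch, of how a fixed label list hard-codes the only categories a classifier can ever express:

```python
# How a label set hard-codes categories: the output layer is sized to a
# fixed list of classes, so anything outside that list simply cannot be
# expressed. Illustrative only; 512 is a typical feature width, not a
# specific model's.
from torch import nn

LABELS = ["male", "female"]                    # the only answers this model can give
classifier_head = nn.Linear(512, len(LABELS))  # two outputs, nothing else
```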

This brings me to what I am really fascinated by: the panoramic, bird’s-eye view that you get in nineteenth-century landscape photography, that manifests in the twentieth century as surveillance by machines, and that in the twenty-first century becomes total machine capture. At Stanford, we started developing training sets based on taxonomies from literature, psychoanalysis, political economy, and poetry. We built an AI program that can only see scenes from Freud’s The Interpretation of Dreams and another that can only see monsters associated with metaphors of capital, such as vampires and zombies. Another is trained to see “American predators,” from Venus flytraps to Predator drones. With this body of work, I wanted to point to some of the potential dangers associated with the widespread deployment of AI and other optimization technologies.

In AI there are enforcement mechanisms that are even harder to discern. We are training machines on patriarchal histories, racist histories, and so on. We know gender is fluid and race is a construct, but machine categorization admits no such nuance. There is an assumption that the technology is unbiased, but it is not. These are not merely representational or optimization systems; they are set up as normative systems, and therefore they become enforcement systems. Redefining the “normal” human is a political project, and contesting those categories is essential before they become hard-coded into infrastructure. Sight Machine and my photographs in “Time Study” address machine vision and the invisibility of these repressive visual regimes.

Read Trevor Paglen’s 1000 Words in the March 2009 issue of Artforum.
