AI image production

artificial face

This image of ‘a person who does not exist’ is created by two adversarial AI systems – one generates faces and the other looks for flaws (i.e. it tries to detect which faces are artificial). Working together, they refine the collective ability of the system to produce convincing artificial faces. You can see the system in operation here.
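For readers curious about the mechanics, the sketch below shows that adversarial loop in code. It is a deliberately toy version written in PyTorch, not the actual StyleGAN model behind the site: flat random tensors stand in for photographs, but the structure – a generator trying to fool a discriminator, and a discriminator trying to catch it – is the same.

```python
# Minimal sketch of an adversarial (GAN) training loop: a generator learns to
# produce samples, a discriminator learns to flag them as artificial, and each
# one's loss drives the other's improvement. Toy data stands in for face images.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 784        # 784 stands in for a flattened 28x28 image

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),                  # logit: real vs. artificial
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.rand(32, image_dim) * 2 - 1          # stand-in for real photos
    fake = generator(torch.randn(32, latent_dim))     # generated 'faces'

    # 1. Discriminator: learn to tell real images from generated ones.
    d_loss = bce(discriminator(real), torch.ones(32, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. Generator: learn to produce images the discriminator accepts as real.
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```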

Information on how it works is here, and there is an article about the use of these faces in social media here.

The images themselves are produced without direct human intervention. The systems are, however, produced by humans, and the images are created from other images, some of which have been created by humans (and others by capture systems such as CCTV).

Increasingly, deep learning AI systems are being trained using images, such as the deep convolutional neural networks developed by DeepMind, which learn through ‘observation’ of massive collections of images. Humans are, of course, involved in this, not just in the creation of the systems, but also in originating (some) of the images and being the subjects of (some) of the images. As MacKenzie and Munster (2019) point out, not only do images we post on platforms such as Facebook feed into these collections, but the image capture chips on the devices we use (such as smartphones) prepare the images we make for this process of image data extraction.

‘The A11 Bionic released in 2017, iPhone 8’s chip, is optimized for image and video signal processing with a 64-bit and 6-core processor. But it is also optimized to work for machine learning using Apple’s CoreML platform. This ‘platform’ (in a localized sense) enhances image and facial recognition among its raft of AI capabilities, which also include object detection and natural language processing.’ (p.13)
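The quotation points to the way consumer devices are now tuned for machine learning as much as for picture-making. As a hedged illustration of that – not Apple’s own pipeline – the sketch below uses the coremltools library to package a stock image-recognition network as a CoreML model of the kind this hardware is designed to run on-device; the particular model and input size are assumptions chosen for the example.

```python
# Illustrative sketch: converting a trained image-recognition network into a
# CoreML model for on-device inference. The ResNet model and 224x224 input are
# placeholder assumptions, not part of Apple's actual face-recognition stack.
import torch
import torchvision
import coremltools as ct

# Any trained vision model would do; a stock ImageNet ResNet stands in here.
model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example)

# Convert to a CoreML model that accepts camera images directly, so the
# device's image pipeline can feed it without extra preprocessing.
mlmodel = ct.convert(
    traced,
    inputs=[ct.ImageType(name="image", shape=example.shape)],
    convert_to="mlprogram",
)
mlmodel.save("ImageClassifier.mlpackage")
```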

These devices are perhaps more accurately viewed not as cameras, but as image sensors that produce data in a chain of operations feeding the formation of AI neural networks. What is changing is not whether humans are involved in the making of images, but how we are involved, and what is ultimately created in the process of image/data production when we ‘take a picture’ with these kinds of digital devices.

It was interesting to do this task (seeking ‘non-human’ sources of images) alongside listening to Simon Norfolk’s reflections on the redundancy and poverty of contemporary photographic practice (and education) in his interview with Ben Smith (A Small Voice podcast, 12th June 2019). Both reinforce the need to adopt a relational view of photography, which acknowledges differences between the fields in which photographic images are made, circulated, deployed and consumed, and the manner in which what we consider photography to be (and to be able to do) is transformed as we move between contexts and domains of practice. I’ll pick up the issues raised in the Norfolk interview, and relate these to my own project and practice, in a subsequent post.

References

MacKenzie, A. & Munster, A. 2019. Platform Seeing: Image Ensembles and Their Invisualities. Theory, Culture & Society. Advance online publication [https://doi.org/10.1177/0263276419847508]

Norfolk, S. 2019. Interviewed by Ben Smith. A Small Voice [podcast], 107, 12th June 2019.