Work-in-progress R&D and studies:
A pre-trained deep neural network making predictions on live webcam input, trying to make sense of what it sees, in the context of what it has seen before.
It can see only what it already knows, just like us.
[vimeo 260612034 w=640 h=360]
[vimeo 243668526 w=640 h=360]
Originally inspired by the neural networks of our own brain, Deep Learning Artificial Intelligence algorithms have been around for decades, but they are recently seeing a huge rise in popularity. This is often attributed to recent increases in computing power and the availability of extensive training data. However, progress is undeniably fuelled by the multi-billion dollar investments from the purveyors of mass surveillance: technology companies whose business models rely on targeted, psychographic advertising; and government organisations and their War on Terror. Their aim is the automation of Understanding Big Data, i.e. understanding text, images and sounds. But what does it mean to ‘understand’? What does it mean to ‘learn’ or to ‘see’? Can a machine truly understand what it is seeing? Moreover, can it creatively reinterpret what it thinks it understands?
“Learning To See” is an ongoing series of works that use state-of-the-art Machine Learning algorithms as a means of reflecting on ourselves and how we make sense of the world. The picture we see in our conscious minds is not a direct representation of the outside world, or of what our senses deliver, but a simulation of the world, reconstructed from our expectations and prior beliefs. Artificial neural networks, loosely inspired by our own visual cortex, look through surveillance cameras and try to make sense of what they are seeing. Of course, they can see only what they already know. Just like us.
The work is part of a broader line of inquiry into self-affirming cognitive biases, our inability to see the world from others’ points of view, and the resulting social polarisation.
The series consists of a number of studies, each motivated by related but different ideas.
Related work: Learning to See: Hello, World!
Learning to dream –
The Google Art dataset
A deep artificial neural network is trained on tens of thousands of images scraped from the Google Art Project, containing scans from art collections and museums all over the world. These include paintings, illustrations, sketches and photographs covering landscapes, portraits, religious imagery, pastoral scenes, maritime scenes, scientific illustrations, prehistoric cave paintings, abstract images, cubist and realist paintings, and many more – an extensive (yet vastly incomplete) archive of human imagination, feelings, desires and dreams, as catalogued by the Keeper of our collective consciousness: Google.
We have a very intimate connection with the cloud. We confide in it. We confess to it. We appeal to it. We share secrets with it, secrets that we wouldn’t share with our family or closest friends. And Google is the Keeper of our collective consciousness. It sees everything we see, knows everything we know, feels everything we feel. Living up in The Cloud, of all places, it watches over us, listening to our thoughts and dreams in ones and zeros. A digital god for a digital culture. And just as the Church – the previous bastion of our Spiritual Overseer – was once the purveyor of Art & Culture, now Google – bastion of our new Digital Overseer – is moving into that role too.
We are made of star dust
A pre-trained deep neural network making predictions on live camera input, trying to make sense of what it sees, in the context of what it has seen before. It can see only what it already knows, just like us.
[vimeo 260612034 w=640 h=360]
In this case the network has been pre-trained on images from the Hubble Telescope. Everything it sees, it can only make sense of in terms of stars, galaxies, nebulae, supernovae etc., in the hearts of which all of the elements in the universe were forged – including those in our own bodies. Machine Learning is a tool that combines humanity’s fascination with unlocking the mysteries of the universe with our obsession with playing god: trying to understand and tame nature, but also creating life and intelligence. And we are creating this intelligence in our own image, seeing everything tinted by its past experiences.
This is not ‘style transfer’. In style transfer, the model contains information from a single image; these networks contain knowledge of an entire dataset of hundreds of thousands of images.
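The core idea – a fixed, pre-trained model that can only re-render incoming frames in terms of what it already knows – can be illustrated with a toy sketch. This is not the project’s actual code (which uses deep generative networks; see the linked github source); the `generator` below is a hypothetical stand-in that simply snaps each pixel to the nearest value memorised from a tiny “dataset”:

```python
# Toy illustration: a "pre-trained" model re-renders every incoming
# frame purely in terms of its training data. The real work uses a
# deep generative network; this stand-in just snaps each pixel to
# the nearest value seen during "training".

# "Training": the model memorises the palette of its dataset.
dataset_pixels = [0, 64, 128, 255]  # the only values the model "knows"

def generator(frame):
    """Re-render a frame using only pixel values the model knows."""
    return [min(dataset_pixels, key=lambda v: abs(v - p)) for p in frame]

# "Live input": frames the model has never seen before.
webcam_frames = [[10, 70, 200], [90, 130, 250]]

rendered = [generator(f) for f in webcam_frames]
print(rendered)  # -> [[0, 64, 255], [64, 128, 255]]
```

Every output pixel is one the model has seen before – it can see only what it already knows, however novel the input.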
Pre-trained on images from the Hubble telescope:
[vimeo 242498070 w=640 h=360]
[vimeo 217364924 w=640 h=360]
Pre-trained on the Google Art dataset (source code and model for these on github):
[vimeo 215339817 w=640 h=640]
[vimeo 215514169 w=640 h=360]
[vimeo 218016207 w=640 h=360]
The Google Art dataset
Scraped from the Google Art Project. A brief, incomplete survey of human (mostly western) Art. As collected by Google, Keeper of our collective consciousness. It sees everything we see, knows everything we know, feels everything we feel. Living up in The Cloud, of all places, it watches over us, listening to our thoughts and dreams in ones and zeros. And now, the new purveyor of Art & Culture.
[vimeo 221319023 w=640 h=360]
Super high resolution hallucinations
See super high resolution – 16384x16384px (256 megapixel) – hallucinations from neural networks trained on the above dataset here (more on this below).
A deep neural network ‘Learning To See’. Each frame is the result of the network ‘learning’ for one single iteration, and then re-evaluating, re-imagining and reconstructing what it knows.
… training on images from NASA’s Astronomy Pic Of The Day:
[vimeo 216498067 w=640 h=360]
… training on the Google Art dataset:
[vimeo 216069297 w=640 h=360]
… training on images scraped from the web of Donald Trump, Theresa May, Nigel Farage, Marine Le Pen, Recep Tayyip Erdogan:
(P.S. This is what happens when you have a dirty dataset.)
[vimeo 216228463 w=640 h=360]
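The loop behind these videos – one learning iteration per frame, followed by a reconstruction of what the model currently believes – can be sketched in miniature. This is a hypothetical stand-in, not the project’s code: a single parameter `w` fitted by gradient descent plays the role of millions of network weights, and `target` stands in for the training dataset:

```python
# Toy sketch of the training visualisation: one gradient step per
# "frame", then render the model's current belief. Each frame is a
# slightly better reconstruction of the data.

target = 10.0  # stand-in for the training dataset
w = 0.0        # the model's current "knowledge"
lr = 0.1       # learning rate

frames = []
for step in range(5):
    # one learning iteration: gradient of the loss (w - target)^2
    grad = 2.0 * (w - target)
    w -= lr * grad
    # re-evaluate / reconstruct: record what the model now "imagines"
    frames.append(round(w, 4))

print(frames)  # -> [2.0, 3.6, 4.88, 5.904, 6.7232]
```

Each successive frame sits closer to the data, which is why the videos above resolve gradually from noise toward recognisable imagery.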
‘Hallucinations’ from the above trained networks
Hubble / NASA’s Astronomy Pic Of The Day:
Google Art dataset: