Google explains training neural networks with artistic images


Google Research has published details about its work on neural networks. The research covers different ways of working with images: feeding images into a network, the reverse of that process, in which a network generates an image from random noise, and everything in between.

The underlying techniques produce intriguing images. They suggest that in the near future, art lovers may have to start wondering whether they are looking at a work made by a human or one conceived by a few layers of artificial neurons.

In any case, there is already a name for the new art movement: Inceptionism. It is built on neural networks that are trained by analyzing millions of images, with the network's parameters gradually adjusted until the classification the researchers want is obtained. Each picture is first fed into the input layer, after which that layer 'talks' to the next layer, and so on until the output layer is reached. The network's 'response' comes from the output layer.
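The layer-by-layer pass described above can be sketched in a few lines. The layer sizes, random weights, and ReLU nonlinearity below are purely illustrative and not Google's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer sizes: input vector -> two hidden layers -> output classes.
sizes = [64, 32, 16, 10]
weights = [rng.standard_normal((m, n)) * 0.1 for m, n in zip(sizes[:-1], sizes[1:])]

def forward(image_vector):
    """Each layer 'talks' to the next until the output layer is reached."""
    activation = image_vector
    for w in weights:
        activation = np.maximum(activation @ w, 0.0)  # ReLU nonlinearity
    # The network's 'response': scores over the output classes.
    return activation

scores = forward(rng.standard_normal(64))
print(scores.shape)  # (10,)
```

Training then consists of nudging those weight matrices, over millions of example images, until the output scores match the desired labels.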

The challenge for the researchers is to understand exactly what happens in each layer; at present it is not fully known what each layer detects. For example, a first layer may respond to corners and edges, subsequent layers may interpret basic shapes, such as a door or an animal, and the final layers may activate on even more complex structures, such as entire buildings. To find out what is happening inside the network, the procedure can be reversed: the network is asked, for example, to imagine a picture of a banana starting from random noise. That is where it gets really interesting.
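That reversal amounts to gradient ascent on the input image rather than on the weights. A heavily simplified sketch follows; a fixed random linear map stands in for the trained network's 'banana' class score, so the gradient is known in closed form (in the real procedure it comes from backpropagation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the trained network's class score: a linear map.
w = rng.standard_normal(256)

def class_score(image):
    return w @ image

image = rng.standard_normal(256)  # start from random noise
step = 0.1
before = class_score(image)
for _ in range(100):
    # Gradient of (w @ image) with respect to the image is simply w,
    # so each step moves the image toward a higher class score.
    image += step * w
after = class_score(image)
print(after > before)  # True: the image now excites the class more
```

With a real convolutional network, the same loop (plus smoothness constraints on the image) is what produces the dream-like pictures of bananas, dumbbells, and buildings.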

It turns out that networks trained to interpret pictures contain a surprising amount of information for depicting subjects themselves, although the result is not always correct. With the word 'dumbbell', for instance, the network consistently renders part of a human arm along with the weight:

The team has also run tests in which it is not specified in advance what should be highlighted; instead, the network itself decides what to see. A random picture is loaded, and one layer of the neural network is asked to enhance whatever it detects. Because each layer handles a different level of abstraction, the results range from simple strokes to figures and whole scenes. By repeatedly feeding the result back in, a feedback loop, ever more recognizable images emerge.
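The feedback loop can be sketched in the same spirit as the earlier example. Again a random linear layer stands in for a trained one, and the image is repeatedly nudged to amplify whatever that layer responds to:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for one hidden layer of a trained network.
W = rng.standard_normal((32, 64)) * 0.1

def layer_energy(image):
    """How strongly the layer as a whole responds to the image."""
    a = W @ image
    return 0.5 * float(a @ a)

image = rng.standard_normal(64)  # start from an arbitrary picture
step = 0.05
energies = [layer_energy(image)]
for _ in range(50):
    # Feedback loop: boost whatever features the layer already detects.
    image += step * (W.T @ (W @ image))  # gradient of the layer energy
    energies.append(layer_energy(image))
print(energies[-1] > energies[0])  # True: the detected features grow stronger
```

Run against a deep layer of a real network, this kind of loop is what turns faint hints of eyes, towers, or animals in a photo into fully visible ones.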

Source: Inception Image Gallery

Ultimately, of course, Google is not doing this work just for fun. The goal is to understand and visualize how neural networks learn to perform difficult classification tasks, how the network architecture can be improved, and what the network has learned during training.

Update June 22, 11:38: Tweaker H!GHGuY rightly points out that the article greatly simplifies reality. The article has been adapted accordingly.
