A Neural Net Creates Movies In The Style Of Any Artist
Written by David Conrad |
Saturday, 14 May 2016 |
Deep neural networks have been impressing people not only with their AI abilities but also with their artistic ones. The latest work can take the style of any artist and convert existing video into that style. You have to see it to believe it.
Neural networks are good at generalizing. For example, you can use a trained network to morph an input image into something that has the style of a reference painting. The result is a photo that looks as if it had been painted by the artist.
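To make the idea concrete before moving on to video, here is a minimal sketch of the single-image pixel-optimization approach in PyTorch. It is a rough illustration of the technique, not the researchers' implementation: the `vgg_features` helper, the layer name "conv4_2" and the loss weights are all assumptions made for the example. The loss combines the features of the content image with the feature correlations (Gram matrices) of the style image, an idea explained below.

```python
# A minimal sketch of single-image neural style transfer.
# `vgg_features` is a hypothetical helper returning a dict of feature
# maps {layer_name: tensor of shape (1, C, H, W)} from a pretrained net.
import torch
import torch.nn.functional as F

def gram_matrix(feat):
    # Correlations between feature channels: this is where the style lives.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def stylise(content_img, style_img, vgg_features, steps=300, lr=0.05):
    # Optimise the pixels of the image directly, starting from the content.
    result = content_img.clone().requires_grad_(True)
    opt = torch.optim.Adam([result], lr=lr)
    with torch.no_grad():
        content_feats = vgg_features(content_img)
        style_grams = {k: gram_matrix(v)
                       for k, v in vgg_features(style_img).items()}
    for _ in range(steps):
        opt.zero_grad()
        feats = vgg_features(result)
        # Content loss: stay close to the input image's features.
        content_loss = F.mse_loss(feats["conv4_2"], content_feats["conv4_2"])
        # Style loss: match the painting's feature correlations.
        style_loss = sum(F.mse_loss(gram_matrix(feats[k]), style_grams[k])
                         for k in style_grams)
        (content_loss + 1000.0 * style_loss).backward()
        opt.step()
    return result.detach()
```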
Star Wars in the Style of Kandinsky
The process is a little more complicated than this because neural networks learn features that capture the content of an image rather than its style. It is reasonable to suppose that the style information is in the correlations between the features. The stylised image is input to the network and its pixels are adjusted to minimise a loss function that combines how far its features are from those of the content image with how far its feature correlations are from those of the image being used as the style example. Notice that this isn't a straightforward use of the neural network in the sense that the network doesn't learn the style of the artist - it just provides the features that are used to characterise the style.
So far so good, but what about creating a movie in the style of an artist using the same technique? If you try the obvious approach and apply the network and optimization to each frame in turn, the result isn't very good because things jump about - there is no temporal consistency. To make a good movie in the style of an artist you have to convert each frame with temporal constraints on how things change.
This is the method investigated by the team at the University of Freiburg. They used state-of-the-art algorithms to estimate the optical flow from one frame to the next. Optical flow gives you a measure of how things move and change between frames. The idea is to use the flow computed on the original video to warp the previous stylised frame, and to include the difference between this warped frame and the current stylised frame in the loss function used in the optimization. So now the optimization tries to keep the changes between frames consistent with the motion in the original video, as well as matching the original features and the correlations in the style image.
The exact details of how to run such an optimization are complicated and you can read about them in the paper. However, it is worth saying that it was implemented on an Nvidia Titan X GPU and took about eight to ten minutes for each 1024x436 frame.
Now you know how it works, take a look at the video that explains the different constraints applied. Notice the unconstrained example at the start where the applied style jumps about all over the place.
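The sketch below shows what such a temporal consistency term might look like in PyTorch. It assumes a precomputed backward flow field `flow` of shape (H, W, 2), mapping each pixel of the current frame to its position in the previous frame, and a `mask` that is 1 where the flow is reliable and 0 in occluded regions, where the penalty is dropped. All the names are illustrative, not taken from the paper's implementation.

```python
# A sketch of a per-frame temporal consistency term.
# Assumptions (not the paper's code): `flow` is a precomputed backward
# flow field (H, W, 2); `mask` zeroes out occluded or unreliable pixels.
import torch
import torch.nn.functional as F

def warp_previous(prev_stylised, flow):
    # Sample the previous stylised frame (1, C, H, W) along the flow
    # using bilinear grid sampling.
    _, _, h, w = prev_stylised.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).float() + flow  # sample coordinates
    # grid_sample expects coordinates normalised to [-1, 1].
    grid[..., 0] = 2.0 * grid[..., 0] / (w - 1) - 1.0
    grid[..., 1] = 2.0 * grid[..., 1] / (h - 1) - 1.0
    return F.grid_sample(prev_stylised, grid.unsqueeze(0), align_corners=True)

def temporal_loss(current, prev_stylised, flow, mask):
    # Penalise the current frame where it deviates from the warped
    # previous frame, except where the mask marks the flow as unreliable.
    warped = warp_previous(prev_stylised, flow)
    return (mask * (current - warped) ** 2).mean()
```

This term is simply added, with a weight, to the content and style losses from the single-image case, so the optimizer is pulled towards frames that move the way the original video moves.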
There is also a less technical video which is worth seeing to appreciate how well it works.
So it's fun, but is it useful? You might not want to re-style an existing movie in, say, the style of Picasso or "The Scream", but it opens up the possibility of creating new movies in the style of...
More Information
Artistic style transfer for videos by Manuel Ruder, Alexey Dosovitskiy, Thomas Brox
Last Updated: Wednesday, 08 April 2020