Neural style transfer allows the style of one image to be imposed upon the content of another. The technique was first proposed by Gatys et al. and has since been used to render images in the manner of distinctively styled artists.
In this method, a content image and a style image are passed through the layers of a pretrained image-classification model (in this case VGG-16). As the layers get deeper, the model responds to more abstract, stylistic features of the image. A loss function compares these abstract features of the style image with those of the content image, and gradient descent updates the content image to make the two more similar.
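For concreteness, here is a minimal sketch of that loss in PyTorch. This is not the repo's actual code: the layer indices, weights, and function names are illustrative assumptions, but the structure (VGG features, Gram matrices for style, MSE for content) follows the Gatys et al. approach.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Pretrained VGG-16 used only as a fixed feature extractor.
vgg = models.vgg16(pretrained=True).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def extract_features(image, layers):
    """Collect activations from the chosen VGG-16 layers."""
    feats, x = {}, image
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            feats[i] = x
    return feats

def gram_matrix(feat):
    """Channel-to-channel correlations, a common summary of style."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_content_loss(gen, content_feats, style_grams, style_layers,
                       content_layer, style_weight=1e6, content_weight=1.0):
    """Weighted sum of style loss (Gram MSE) and content loss (feature MSE)."""
    gen_feats = extract_features(gen, set(style_layers) | {content_layer})
    c_loss = F.mse_loss(gen_feats[content_layer], content_feats[content_layer])
    s_loss = sum(F.mse_loss(gram_matrix(gen_feats[i]), style_grams[i])
                 for i in style_layers)
    return content_weight * c_loss + style_weight * s_loss
```

Optimising the content image against this loss (e.g. with L-BFGS or Adam) is what gradually imposes the style.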
After reading the paper, I implemented the model myself. Here's an example outcome using Hokusai's The Great Wave off Kanagawa as the style image and my GitHub profile picture as the content image.
To use it yourself, run the Jupyter notebook, setting PathyStyle to the path of the style image and PathContent to the path of your content image. You can also change how heavily style and content loss are weighted relative to each other when you call train.
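A hypothetical usage sketch is below. The variable names come from the description above, but the train signature lives in the notebook, so treat the keyword arguments as assumptions to be adjusted.

```python
PathyStyle = "images/the_great_wave.jpg"    # path to the style image
PathContent = "images/profile_picture.jpg"  # path to the content image

# Keyword names are assumptions; check the notebook's train definition.
# A larger style-to-content ratio pushes the output further toward the style.
train(style_weight=1e6, content_weight=1.0)
```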
Let me know if you use it, or make any excellent images!