Learning from Multi-domain Artistic Images for Arbitrary Style Transfer

Date
2019
Publisher
The Eurographics Association
Abstract
We propose a fast feed-forward network for arbitrary style transfer, which can generate stylized images for previously unseen content and style image pairs. In addition to the traditional content and style representations based on deep features and texture statistics, we use adversarial networks to regularize the generation of stylized images. Our adversarial network learns the intrinsic properties of image styles from large-scale multi-domain artistic images. The adversarial training is challenging because both the input and output of our generator are diverse multi-domain images. We use a conditional generator that stylizes content by shifting the statistics of deep features, and a conditional discriminator based on the coarse category of styles. Moreover, we propose a mask module that spatially decides the stylization level and stabilizes adversarial training by avoiding mode collapse. As a side effect, our trained discriminator can be applied to rank and select representative stylized images. We qualitatively and quantitatively evaluate the proposed method, and compare it with recent style transfer methods. We release our code and model at https://github.com/nightldj/behance_release.
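The abstract's "shifting the statistics of deep features" refers to re-normalizing content features to match style-feature statistics. Below is a minimal PyTorch sketch of this idea (in the spirit of adaptive instance normalization), including a mask-based blend loosely mirroring the mask module described above; the function names and shapes are illustrative assumptions, not the authors' released implementation, which is available at the linked repository.

import torch

def shift_feature_statistics(content_feat: torch.Tensor,
                             style_feat: torch.Tensor,
                             eps: float = 1e-5) -> torch.Tensor:
    """Re-normalize content features (N, C, H, W) to match the
    per-channel mean and standard deviation of the style features."""
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content_feat - c_mean) / c_std + s_mean

def masked_stylization(content_feat: torch.Tensor,
                       style_feat: torch.Tensor,
                       mask: torch.Tensor) -> torch.Tensor:
    """Blend stylized and original features with a spatial mask in
    [0, 1], controlling the stylization level per location (a sketch
    of the mask module's role, not its exact formulation)."""
    stylized = shift_feature_statistics(content_feat, style_feat)
    return mask * stylized + (1.0 - mask) * content_feat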
Citation
@inproceedings{10.2312:exp.20191073,
  booktitle = {ACM/EG Expressive Symposium},
  editor    = {Kaplan, Craig S. and Forbes, Angus and DiVerdi, Stephen},
  title     = {{Learning from Multi-domain Artistic Images for Arbitrary Style Transfer}},
  author    = {Xu, Zheng and Wilber, Michael and Fang, Chen and Hertzmann, Aaron and Jin, Hailin},
  year      = {2019},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-078-9},
  DOI       = {10.2312/exp.20191073}
}