Academic Paper

Example-Guided Style-Consistent Image Synthesis From Semantic Labeling
Document Type
Conference
Source
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1495-1504, Jun. 2019
Subject
Computing and Processing
Image and Video Synthesis
Deep Learning
Language
English
ISSN
2575-7075
Abstract
Example-guided image synthesis aims to synthesize an image from a semantic label map and an exemplary image indicating style. We use the term "style" in this problem to refer to implicit characteristics of images: in portraits, "style" includes gender, racial identity, age, and hairstyle; in full-body pictures it includes clothing; in street scenes it refers to weather, time of day, and the like. A semantic label map in these cases indicates facial expression, full-body pose, or scene segmentation. We propose a solution to the example-guided image synthesis problem using conditional generative adversarial networks with style consistency. Our key contributions are (i) a novel style consistency discriminator that determines whether a pair of images is consistent in style; (ii) an adaptive semantic consistency loss; and (iii) a training data sampling strategy for synthesizing results that are style-consistent with the exemplar. We demonstrate the effectiveness of our method on face, dance, and street-view synthesis tasks.
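The abstract only names the key components, so the sketch below gives a rough illustration of the pair-based idea behind contribution (i): a discriminator that judges whether two images share a style. This is a minimal sketch assuming PyTorch, not the authors' published architecture; the layer choices, channel widths, and patch-level output are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's exact architecture) of a
# style consistency discriminator: two images are stacked channel-wise and
# scored for whether they plausibly share the same style.
import torch
import torch.nn as nn


class StyleConsistencyDiscriminator(nn.Module):
    def __init__(self, in_channels=3, base_channels=64):
        super().__init__()
        # Input is a pair of RGB images concatenated along the channel axis.
        self.net = nn.Sequential(
            nn.Conv2d(2 * in_channels, base_channels, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base_channels, base_channels * 2, 4, stride=2, padding=1),
            nn.InstanceNorm2d(base_channels * 2),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base_channels * 2, base_channels * 4, 4, stride=2, padding=1),
            nn.InstanceNorm2d(base_channels * 4),
            nn.LeakyReLU(0.2, inplace=True),
            # Patch-level logits: each spatial location scores style consistency.
            nn.Conv2d(base_channels * 4, 1, 4, stride=1, padding=1),
        )

    def forward(self, image_a, image_b):
        pair = torch.cat([image_a, image_b], dim=1)
        return self.net(pair)


if __name__ == "__main__":
    d = StyleConsistencyDiscriminator()
    exemplar = torch.randn(1, 3, 128, 128)   # style exemplar
    generated = torch.randn(1, 3, 128, 128)  # synthesized image
    logits = d(exemplar, generated)
    print(logits.shape)  # map of patch-level style-consistency scores
```

In a conditional GAN setup of the kind the abstract describes, such a discriminator would presumably score the exemplar paired with the generator's output, with genuinely style-consistent pairs (drawn by the training data sampling strategy) serving as positives and mismatched pairs as negatives.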