Semantic Image Synthesis with Spatially-Adaptive Normalization - Robotics Institute Carnegie Mellon University

Semantic Image Synthesis with Spatially-Adaptive Normalization

Taesung Park, Ming-Yu Liu, Ting-Chun Wang, and Jun-Yan Zhu
Conference Paper, Proceedings of Computer Vision and Pattern Recognition (CVPR), pp. 2332–2341, June 2019

Abstract

We propose spatially-adaptive normalization, a simple but effective layer for synthesizing photorealistic images given an input semantic layout. Previous methods directly feed the semantic layout as input to the network, forcing the network to carry the layout information through all of its layers. Instead, we propose using the input layout to modulate the activations in normalization layers through a spatially-adaptive, learned affine transformation. Experiments on several challenging datasets demonstrate the superiority of our method over existing approaches in terms of both visual fidelity and alignment with the input layouts. Finally, our model allows users to easily control the style and content of the synthesized images, as well as to create multi-modal results. Code is available upon publication.
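The core idea above — normalizing a feature map and then applying a spatially varying affine transformation predicted from the semantic layout — can be sketched as follows. This is a minimal PyTorch sketch, not the authors' released implementation; the hidden width (128), kernel sizes, and the `(1 + gamma)` modulation form are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPADE(nn.Module):
    """Sketch of a spatially-adaptive normalization layer.

    Features are normalized with a parameter-free BatchNorm, then
    modulated element-wise by gamma/beta maps that two small conv
    branches predict from the input semantic layout. Hyperparameters
    here are illustrative, not the paper's exact configuration.
    """
    def __init__(self, num_features, num_labels, hidden=128):
        super().__init__()
        self.norm = nn.BatchNorm2d(num_features, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(num_labels, hidden, kernel_size=3, padding=1),
            nn.ReLU())
        self.gamma = nn.Conv2d(hidden, num_features, kernel_size=3, padding=1)
        self.beta = nn.Conv2d(hidden, num_features, kernel_size=3, padding=1)

    def forward(self, x, segmap):
        # Resize the one-hot layout to the current feature resolution.
        segmap = F.interpolate(segmap, size=x.shape[2:], mode="nearest")
        h = self.shared(segmap)
        # Spatially varying affine modulation of the normalized features.
        return self.norm(x) * (1 + self.gamma(h)) + self.beta(h)

# Usage: 4 feature maps of 64 channels at 32x32, conditioned on a
# 10-class one-hot segmentation map given at 256x256 resolution.
spade = SPADE(num_features=64, num_labels=10)
x = torch.randn(4, 64, 32, 32)
seg = torch.randn(4, 10, 256, 256)
out = spade(x, seg)
print(out.shape)  # torch.Size([4, 64, 32, 32])
```

Because the modulation parameters are themselves spatial maps rather than per-channel scalars, the layout information is re-injected at every normalization layer instead of having to survive the whole network from the input.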

Notes
Best Paper Finalist

BibTeX

@conference{Park-2019-125679,
author = {Taesung Park and Ming-Yu Liu and Ting-Chun Wang and Jun-Yan Zhu},
title = {Semantic Image Synthesis with Spatially-Adaptive Normalization},
booktitle = {Proceedings of (CVPR) Computer Vision and Pattern Recognition},
year = {2019},
month = {June},
pages = {2332--2341},
}