Generative Image Modeling using Style and Structure Adversarial Networks

Conference Paper, Proceedings of the European Conference on Computer Vision (ECCV), pp. 318-335, October 2016

Abstract

Current generative frameworks use end-to-end learning and generate images by sampling from a uniform noise distribution. However, these approaches ignore the most basic principle of image formation: images are a product of (a) Structure: the underlying 3D model; and (b) Style: the texture mapped onto the structure. In this paper, we factorize the image generation process and propose the Style and Structure Generative Adversarial Network (S^2-GAN). Our S^2-GAN has two components: the Structure-GAN generates a surface normal map, and the Style-GAN takes the surface normal map as input and generates the 2D image. Apart from the real vs. generated loss function, we use an additional loss that compares surface normals computed from the generated images against the input normal map. The two GANs are first trained independently and then merged via joint learning. We show that our S^2-GAN model is interpretable, generates more realistic images, and can be used to learn unsupervised RGBD representations.
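For intuition, below is a minimal PyTorch sketch of the two-stage pipeline the abstract describes: a Structure-GAN generator mapping noise to a surface-normal map, followed by a Style-GAN generator conditioned on those normals. Layer sizes, output resolution, and the omission of the Style-GAN's own noise input are simplifying assumptions for illustration, not the paper's actual architecture.

import torch
import torch.nn as nn

class StructureGenerator(nn.Module):
    """Noise -> surface-normal map (layer sizes are illustrative)."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.ReLU(True),  # 1x1 -> 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(True),    # 4x4 -> 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(True),     # 8x8 -> 16x16
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),           # 16x16 -> 32x32
        )

    def forward(self, z):
        # Reshape the noise vector to a 1x1 spatial map and upsample it.
        return self.net(z.view(z.size(0), -1, 1, 1))

class StyleGenerator(nn.Module):
    """Surface-normal map -> RGB image. The paper's Style-GAN also takes
    its own noise vector; it is omitted here to keep the sketch short."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, 1, 1), nn.ReLU(True),
            nn.Conv2d(64, 64, 3, 1, 1), nn.ReLU(True),
            nn.Conv2d(64, 3, 3, 1, 1), nn.Tanh(),
        )

    def forward(self, normals):
        return self.net(normals)

# Sampling follows the factorization: structure first, then style.
structure_g = StructureGenerator()
style_g = StyleGenerator()
z = torch.randn(8, 100)
normals = structure_g(z)   # (a) Structure: a 3-channel surface-normal map
images = style_g(normals)  # (b) Style: texture rendered onto that structure

# The additional loss from the abstract would estimate normals from `images`
# with a separate network (not shown) and penalize disagreement with
# `normals`, e.g. via a cosine or L2 penalty.

In the paper, the two generators are trained adversarially against their own discriminators before being joined; the sketch above only shows the forward sampling path.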

BibTeX

@conference{Wang-2016-113332,
author = {Xiaolong Wang and Abhinav Gupta},
title = {Generative Image Modeling using Style and Structure Adversarial Networks},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2016},
month = {October},
pages = {318--335},
}