Toward Multimodal Image-to-Image Translation

Jun-Yan Zhu, Richard Zhang, Deepak Pathak, Trevor Darrell, Alexei A. Efros, Oliver Wang, and Eli Shechtman
Conference Paper, Proceedings of Neural Information Processing Systems (NeurIPS), pp. 465-476, December 2017

Abstract

Many image-to-image translation problems are ambiguous, as a single input image may correspond to multiple possible outputs. In this work, we aim to model a distribution of possible outputs in a conditional generative modeling setting. The ambiguity of the mapping is distilled in a low-dimensional latent vector, which can be randomly sampled at test time. A generator learns to map the given input, combined with this latent code, to the output. We explicitly encourage the connection between output and the latent code to be invertible. This helps prevent a many-to-one mapping from the latent code to the output during training, also known as the problem of mode collapse, and produces more diverse results. We explore several variants of this approach by employing different training objectives, network architectures, and methods of injecting the latent code. Our proposed method encourages bijective consistency between the latent encoding and output modes. We present a systematic comparison of our method and other variants on both perceptual realism and diversity.
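To make the abstract's core idea concrete, below is a minimal PyTorch sketch of the invertibility constraint it describes: a generator consumes an input image concatenated with a spatially replicated latent code, an encoder maps the generated output back to a code, and an L1 latent-reconstruction loss discourages many-to-one mappings from code to output. This is an illustrative toy, not the paper's BicycleGAN architecture; the names ToyGenerator and ToyEncoder, the 8-dimensional code, and all layer sizes are assumptions made for brevity.

import torch
import torch.nn as nn

LATENT_DIM = 8  # assumption: a small code, matching "low-dimensional latent vector"

class ToyGenerator(nn.Module):
    """Maps an input image plus a latent code to an output image."""
    def __init__(self, latent_dim=LATENT_DIM):
        super().__init__()
        # Inject z by replicating it spatially and concatenating with the input.
        self.net = nn.Sequential(
            nn.Conv2d(3 + latent_dim, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, x, z):
        z_map = z.view(z.size(0), -1, 1, 1).expand(-1, -1, x.size(2), x.size(3))
        return self.net(torch.cat([x, z_map], dim=1))

class ToyEncoder(nn.Module):
    """Maps an output image back to a latent code (the inverse direction)."""
    def __init__(self, latent_dim=LATENT_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, latent_dim),
        )

    def forward(self, y):
        return self.net(y)

G, E = ToyGenerator(), ToyEncoder()
x = torch.randn(4, 3, 64, 64)    # batch of input images
z = torch.randn(4, LATENT_DIM)   # randomly sampled latent codes
y_hat = G(x, z)                  # diverse outputs, one per sampled z
z_hat = E(y_hat)                 # recover the code from the output
latent_recon_loss = nn.functional.l1_loss(z_hat, z)  # encourages invertibility

In training, this latent-reconstruction term would be combined with a conditional adversarial loss on y_hat; driving it down forces distinct codes to produce distinguishable outputs, which is how the method promotes diversity.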

BibTeX

@conference{Zhu-2017-125690,
author = {Jun-Yan Zhu and Richard Zhang and Deepak Pathak and Trevor Darrell and Alexei A. Efros and Oliver Wang and Eli Shechtman},
title = {Toward Multimodal Image-to-Image Translation},
booktitle = {Proceedings of Neural Information Processing Systems (NeurIPS)},
year = {2017},
month = {December},
pages = {465--476},
}