Diverse Image Generation via Self-Conditioned GANs
Conference Paper, Proceedings of (CVPR) Computer Vision and Pattern Recognition, pp. 14274-14283, June 2020
Abstract
We introduce a simple but effective unsupervised method for generating diverse images. We train a class-conditional GAN model without using manually annotated class labels. Instead, our model is conditioned on labels automatically derived from clustering in the discriminator's feature space. Our clustering step automatically discovers diverse modes and explicitly requires the generator to cover them. Experiments on standard mode collapse benchmarks show that our method outperforms several competing methods at addressing mode collapse. Our method also performs well on large-scale datasets such as ImageNet and Places365, improving both diversity and standard metrics (e.g., Fréchet Inception Distance) compared to previous methods.
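The training loop described in the abstract pairs a periodic clustering step with conditional GAN updates. The following is a minimal PyTorch sketch of that idea, not the authors' released code: the conditional generator and discriminator interfaces, the discriminator.features() hook, the non-saturating GAN loss, and all hyperparameters (50 clusters, a 128-dimensional latent) are illustrative assumptions.

# Minimal sketch of self-conditioned GAN training (assumed interfaces,
# not the reference implementation).
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def recluster(discriminator, real_images, num_clusters=50):
    """Assign pseudo-labels to real images by running k-means on
    discriminator features (assumes discriminator.features() returns
    one flat feature vector per image)."""
    with torch.no_grad():
        feats = discriminator.features(real_images).cpu().numpy()
    labels = KMeans(n_clusters=num_clusters).fit_predict(feats)
    return torch.as_tensor(labels)

def train_step(generator, discriminator, real_images, real_labels,
               opt_g, opt_d, num_clusters=50, z_dim=128):
    """One conditional GAN update using cluster pseudo-labels."""
    batch = real_images.size(0)
    # Sample fake class labels uniformly over the discovered clusters,
    # so the generator is pushed to cover every mode.
    fake_labels = torch.randint(num_clusters, (batch,))
    z = torch.randn(batch, z_dim)
    fake_images = generator(z, fake_labels)

    # Discriminator step: real images use their cluster pseudo-labels.
    d_real = discriminator(real_images, real_labels)
    d_fake = discriminator(fake_images.detach(), fake_labels)
    loss_d = F.softplus(-d_real).mean() + F.softplus(d_fake).mean()
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: fool the class-conditional discriminator.
    loss_g = F.softplus(-discriminator(fake_images, fake_labels)).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

In this sketch, recluster() would be re-run every so often during training so the pseudo-labels track the discriminator's evolving feature space; the re-clustering frequency is another assumed hyperparameter.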
BibTeX
@conference{Liu-2020-125673,
author = {Steven Liu and Tongzhou Wang and David Bau and Jun-Yan Zhu and Antonio Torralba},
title = {Diverse Image Generation via Self-Conditioned GANs},
booktitle = {Proceedings of (CVPR) Computer Vision and Pattern Recognition},
year = {2020},
month = {June},
pages = {14274-14283},
}
Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.