LT-GAN: Self-Supervised GAN With Latent Transformation Detection
Abstract
Generative Adversarial Networks (GANs) coupled with self-supervised tasks have shown promising results in unconditional and semi-supervised image generation. We propose a self-supervised approach (LT-GAN) that improves the generation quality and diversity of images by estimating the GAN-induced transformation, i.e. the transformation induced in the generated images by perturbing the latent space of the generator. Specifically, given two pairs of images, where each pair comprises a generated image and its transformed version, the self-supervision task is to identify whether the latent transformation applied to one pair is the same as that applied to the other. This auxiliary loss encourages the generator to produce images whose transformations the auxiliary network can distinguish, which in turn promotes the synthesis of semantically consistent images with respect to latent transformations. We show the efficacy of this pretext task by demonstrating improved image generation quality, measured by FID, on state-of-the-art models in conditional and unconditional settings on the CIFAR-10, CelebA-HQ, and ImageNet datasets. Moreover, we show empirically that LT-GAN improves controlled image editing on CelebA-HQ and ImageNet over baseline models. We also demonstrate experimentally that our proposed LT self-supervision task can be effectively combined with other state-of-the-art training techniques for added benefits. Consequently, our approach achieves a new state-of-the-art FID score of 9.8 on conditional CIFAR-10 image generation.
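To make the pretext task concrete, below is a minimal PyTorch-style sketch of one step of the auxiliary objective. The generator G, the pair encoder E, and the perturbation scale eps_std are hypothetical stand-ins, and the cosine-similarity scoring is an illustrative choice under stated assumptions, not the authors' exact formulation.

```python
# Minimal sketch of the LT self-supervision objective.
# G (generator) and E (encoder mapping an image pair to a
# transformation embedding) are hypothetical modules; eps_std
# is an assumed perturbation scale.
import torch
import torch.nn.functional as F

def lt_pretext_loss(G, E, batch_size, z_dim, eps_std=0.5, device="cpu"):
    # Sample two latent codes, one shared perturbation, and one distinct one.
    z1 = torch.randn(batch_size, z_dim, device=device)
    z2 = torch.randn(batch_size, z_dim, device=device)
    eps_shared = eps_std * torch.randn(batch_size, z_dim, device=device)
    eps_other = eps_std * torch.randn(batch_size, z_dim, device=device)

    # Each pair = (generated image, its latent-transformed version).
    # Positive pairs share the same perturbation; negative pairs do not.
    f1 = E(G(z1), G(z1 + eps_shared))      # transformation embedding, pair 1
    f2_pos = E(G(z2), G(z2 + eps_shared))  # same latent transformation
    f2_neg = E(G(z2), G(z2 + eps_other))   # different latent transformation

    # Binary objective: detect whether the two pairs underwent the same
    # GAN-induced transformation, scored here by cosine similarity.
    pos_score = F.cosine_similarity(f1, f2_pos)
    neg_score = F.cosine_similarity(f1, f2_neg)
    return (F.binary_cross_entropy_with_logits(pos_score, torch.ones_like(pos_score))
            + F.binary_cross_entropy_with_logits(neg_score, torch.zeros_like(neg_score)))
```

In a full training loop, a loss of this form would typically be added to the standard adversarial objectives with a weighting hyperparameter.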
BibTeX
@conference{Patel-2021-126871,
  author    = {Parth Patel and Nupur Kumari and Mayank Singh and Balaji Krishnamurthy},
  title     = {LT-GAN: Self-Supervised GAN With Latent Transformation Detection},
  booktitle = {Proceedings of IEEE/CVF Winter Conference on Applications of Computer Vision (WACV '21)},
  year      = {2021},
  month     = {January},
  pages     = {3189--3198},
}