Differentiable Augmentation for Data-Efficient GAN Training

Shengyu Zhao, Zhijian Liu, Ji Lin, Jun-Yan Zhu, and Song Han
Conference Paper, Proceedings of Neural Information Processing Systems (NeurIPS), December 2020

Abstract

The performance of generative adversarial networks (GANs) deteriorates heavily given a limited amount of training data, mainly because the discriminator memorizes the exact training set. To combat this, we propose Differentiable Augmentation (DiffAugment), a simple method that improves the data efficiency of GANs by imposing various types of differentiable augmentations on both real and fake samples. Previous attempts to directly augment the training data manipulate the distribution of real images and yield little benefit; DiffAugment enables us to apply differentiable augmentation to the generated samples as well, which effectively stabilizes training and leads to better convergence. Experiments demonstrate consistent gains of our method over a variety of GAN architectures and loss functions for both unconditional and class-conditional generation. With DiffAugment, we achieve a state-of-the-art FID of 6.80 with an IS of 100.8 on ImageNet 128×128, and 2-4× reductions in FID given 1,000 images on FFHQ and LSUN. Furthermore, with only 20% of the training data, we can match the top performance on CIFAR-10 and CIFAR-100. Finally, our method can generate high-fidelity images using only 100 images without pre-training, while being on par with existing transfer learning algorithms. Code is available at https://github.com/mit-han-lab/data-efficient-gans.
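
The core mechanism described above — applying the same differentiable transformation T to every image the discriminator sees, real or generated, so that gradients can flow through T back to the generator — can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration under simplifying assumptions, not the official implementation (the linked repository provides the full color, translation, and cutout policies); the brightness and translation ops here are simplified, and the names G, D, z, d_loss, and g_loss are placeholders for the reader's own GAN components.

import torch
import torch.nn.functional as F

def rand_brightness(x):
    # Differentiable per-sample brightness jitter in [-0.5, 0.5).
    return x + (torch.rand(x.size(0), 1, 1, 1, device=x.device) - 0.5)

def rand_translation(x, ratio=0.125):
    # Differentiable random shift of up to `ratio` of the image size:
    # zero-pad, then take a randomly offset crop of the original size.
    b, c, h, w = x.shape
    sh, sw = int(h * ratio), int(w * ratio)
    padded = F.pad(x, [sw, sw, sh, sh])  # (left, right, top, bottom)
    ty = int(torch.randint(0, 2 * sh + 1, ()))
    tx = int(torch.randint(0, 2 * sw + 1, ()))
    return padded[:, :, ty:ty + h, tx:tx + w]

def diff_augment(x, fns=(rand_brightness, rand_translation)):
    # Apply the same family of differentiable augmentations T to both
    # real and fake samples before they reach the discriminator.
    for fn in fns:
        x = fn(x)
    return x

# Where T is applied in one training iteration (G, D, z, d_loss, g_loss
# are hypothetical placeholders for your own GAN components):
#   loss_D = d_loss(D(diff_augment(x_real)), D(diff_augment(G(z).detach())))
#   loss_G = g_loss(D(diff_augment(G(z))))  # gradients flow through T into G

# Sanity check that gradients propagate through the augmentations:
x = torch.randn(4, 3, 64, 64, requires_grad=True)
diff_augment(x).sum().backward()
print(x.grad.shape)  # torch.Size([4, 3, 64, 64])

Because every op in diff_augment is differentiable, the generator receives useful gradients even though it never sees un-augmented discriminator outputs; this is what distinguishes the method from augmenting only the real training images.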

BibTeX

@conference{Zhao-2020-125667,
author = {Shengyu Zhao and Zhijian Liu and Ji Lin and Jun-Yan Zhu and Song Han},
title = {Differentiable Augmentation for Data-Efficient GAN Training},
booktitle = {Proceedings of Neural Information Processing Systems (NeurIPS)},
year = {2020},
month = {December},
}