Generating Adversarial Examples with Adversarial Networks
Abstract
Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs to produce adversary-selected results. Different attack strategies have been proposed to generate adversarial examples, but how to produce them with high perceptual quality and more efficiently requires more research effort. In this paper, we propose AdvGAN to generate adversarial examples with generative adversarial networks (GANs), which can learn and approximate the distribution of original instances. Once the AdvGAN generator is trained, it can generate perturbations efficiently for any instance, which can potentially accelerate adversarial training as a defense. We apply AdvGAN in both semi-whitebox and black-box attack settings. In semi-whitebox attacks, there is no need to access the original target model after the generator is trained, in contrast to traditional white-box attacks. In black-box attacks, we dynamically train a distilled model for the black-box model and optimize the generator accordingly. Adversarial examples generated by AdvGAN on different target models achieve high attack success rates under state-of-the-art defenses compared to other attacks. Our attack placed first on a public MNIST black-box attack challenge with 92.76% accuracy.
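To make the setup concrete, below is a minimal sketch of the kind of generator update AdvGAN performs, assuming PyTorch and simple MLP stand-ins for the generator, discriminator, and target model; the loss weights (alpha, beta) and perturbation bound (c) are illustrative placeholders, not values from the paper.

```python
# A minimal sketch of an AdvGAN-style generator update (assumptions: PyTorch,
# flattened MNIST inputs, MLP models; alpha, beta, and c are illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

DIM = 28 * 28  # flattened MNIST image

generator = nn.Sequential(nn.Linear(DIM, 256), nn.ReLU(), nn.Linear(256, DIM), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(DIM, 256), nn.ReLU(), nn.Linear(256, 1))
target_model = nn.Sequential(nn.Linear(DIM, 128), nn.ReLU(), nn.Linear(128, 10))  # stand-in for the attacked classifier

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)

def generator_step(x, target_class, alpha=1.0, beta=10.0, c=0.3):
    """One generator update: fool the target model, fool the discriminator,
    and keep the perturbation small via a soft hinge on its L2 norm."""
    perturbation = generator(x)
    x_adv = torch.clamp(x + perturbation, 0.0, 1.0)

    # Adversarial loss: push the target model toward the adversary-chosen class.
    loss_adv = F.cross_entropy(target_model(x_adv), target_class)

    # GAN loss: the discriminator should judge x_adv to be a real instance.
    loss_gan = F.binary_cross_entropy_with_logits(
        discriminator(x_adv), torch.ones(x.size(0), 1))

    # Hinge loss: penalize perturbations whose L2 norm exceeds the bound c.
    pert_norm = perturbation.view(x.size(0), -1).norm(p=2, dim=1)
    loss_hinge = torch.clamp(pert_norm - c, min=0.0).mean()

    loss = loss_adv + alpha * loss_gan + beta * loss_hinge
    g_opt.zero_grad()
    loss.backward()
    g_opt.step()
    return loss.item()

# Toy usage on random data, just to show the shapes involved.
x = torch.rand(16, DIM)
t = torch.randint(0, 10, (16,))
generator_step(x, t)
```

The discriminator would be updated in an alternating step (not shown); in the semi-whitebox setting the trained generator alone suffices to produce perturbations, while the black-box setting would swap the target model for a dynamically distilled copy.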
BibTeX
@conference{Xiao-2018-125689,
  author    = {Chaowei Xiao and Bo Li and Jun-Yan Zhu and Warren He and Mingyan Liu and Dawn Song},
  title     = {Generating Adversarial Examples with Adversarial Networks},
  booktitle = {Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI '18)},
  year      = {2018},
  month     = {July},
  pages     = {3905--3911},
}