
Article Information

  • Title: APE-GAN++: An Improved APE-GAN to Eliminate Adversarial Perturbations
  • Authors: Rui Yang; Xiu-Qing Chen; Tian-Jie Cao
  • Journal: IAENG International Journal of Computer Science
  • Print ISSN: 1819-656X
  • Online ISSN: 1819-9224
  • Year: 2021
  • Volume: 48
  • Issue: 3
  • Language: English
  • Publisher: IAENG - International Association of Engineers
  • Abstract: Deep neural networks (DNNs) have been deployed successfully in various scenarios, but numerous studies have shown that they are vulnerable to adversarial examples. Many countermeasures have been developed to protect DNNs against such attacks. APE-GAN is one of these countermeasures; it employs a generative adversarial network (GAN) to eliminate adversarial perturbations. Although it outperforms other countermeasures, it still has shortcomings. First, its training process is unstable and suffers from vanishing gradients. Second, its performance can be improved further. In this paper, we propose APE-GAN++, an enhanced APE-GAN, to overcome these disadvantages. First, APE-GAN++ adopts the WGAN-GP loss to stabilize training. Then, it adds a third-party classification loss to strengthen the generator's capacity to eliminate adversarial perturbations. Experiments are conducted on the MNIST and CIFAR-10 datasets to verify APE-GAN++'s performance. The results show that APE-GAN++ trains stably and avoids the vanishing gradient problem; it also defends against adversarial examples more effectively than other countermeasures. Experimental code is available at https://github.com/Afreadyang/APE-GAN-Plus-Plus.
  • Keywords: Adversarial example; Deep neural network; Generative adversarial network; AI security; APE-GAN
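The abstract describes a generator objective that combines a WGAN-GP adversarial term, a reconstruction term, and a newly added third-party classification term. A minimal sketch of such a weighted-sum objective is shown below; the function name, weights, and component values are illustrative assumptions, not taken from the paper.

```python
def generator_loss(adv_term, pixel_term, cls_term,
                   lam_pix=0.5, lam_cls=0.5):
    """Weighted sum of three generator loss components:
    an adversarial (WGAN-GP-style) term, a pixel-level
    reconstruction term, and a classification term.
    The weights lam_pix and lam_cls are hypothetical."""
    return adv_term + lam_pix * pixel_term + lam_cls * cls_term

# Toy example with made-up component values:
loss = generator_loss(adv_term=-0.2, pixel_term=0.08, cls_term=0.35)
print(round(loss, 4))  # -0.2 + 0.5*0.08 + 0.5*0.35 = 0.015
```

The classification term lets a fixed third-party classifier's feedback push the generator toward outputs that are classified like clean inputs, complementing the purely pixel-level reconstruction loss.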