
Article Information

  • Title: Adversarial Attacks Defense Method Based on Multiple Filtering and Image Rotation
  • Authors: Feng Li; Xuehui Du; Liu Zhang
  • Journal: Discrete Dynamics in Nature and Society
  • Print ISSN: 1026-0226
  • Electronic ISSN: 1607-887X
  • Publication year: 2022
  • Volume: 2022
  • DOI: 10.1155/2022/6124895
  • Language: English
  • Publisher: Hindawi Publishing Corporation
  • Abstract: Adversarial examples in an image classification task cause neural networks to predict incorrect class labels with high confidence. Many applications related to image classification, such as self-driving and facial recognition, have been seriously threatened by adversarial attacks. One class of existing defense methods is preprocessing-based defense, which transforms the inputs before feeding them to the classifier. These methods are independent of the classification models and have excellent defensive effects under oblivious attacks. Image filtering is often used to evaluate the robustness of adversarial examples. However, while filtering weakens the adversarial perturbation, it also discards valuable image features and thus reduces classification accuracy. Furthermore, fixed filtering parameters cannot effectively defend against adversarial attacks. This paper proposes a novel defense method that applies filters with different parameters and randomly rotates the filtered images. The output classification probabilities are statistically averaged, which preserves classification accuracy while removing the perturbation. Experimental results show that the proposed method improves the defense capability of various models against diverse kinds of oblivious adversarial attacks. Under the adaptive attack, the transferability of the adversarial examples among different models is significantly reduced.
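
As a rough illustration of the averaging scheme the abstract describes, the sketch below (in PyTorch) filters the input with several kernel sizes, applies random rotations to each filtered copy, and averages the resulting softmax probabilities. The filter type (mean filtering), kernel sizes, rotation range, and the function name `defended_predict` are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch of a multi-filter, random-rotation defense.
# All hyperparameters below are assumptions for illustration,
# not the authors' reported settings.
import random
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def defended_predict(model, image, kernel_sizes=(3, 5, 7),
                     rotations_per_filter=4, max_angle=30.0):
    """Average class probabilities over several filtered, randomly
    rotated copies of the input (image: [1, C, H, W] in [0, 1])."""
    model.eval()
    probs = []
    with torch.no_grad():
        for k in kernel_sizes:
            # Mean (box) filtering with a k x k kernel; other filters
            # (e.g., median or Gaussian) could be substituted here.
            filtered = F.avg_pool2d(image, kernel_size=k, stride=1,
                                    padding=k // 2)
            for _ in range(rotations_per_filter):
                # Random rotation of the filtered copy.
                angle = random.uniform(-max_angle, max_angle)
                rotated = TF.rotate(filtered, angle)
                probs.append(F.softmax(model(rotated), dim=1))
    # Statistical averaging of the output classification probabilities.
    return torch.stack(probs).mean(dim=0)
```

A prediction would then be `defended_predict(model, x).argmax(dim=1)`; averaging over many randomized, differently filtered copies is what keeps clean accuracy while making the output hard for any single fixed perturbation to control.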