
Article Information

  • Title: Adaptative Perturbation Patterns: Realistic Adversarial Learning for Robust Intrusion Detection
  • Authors: João Vitorino ; Nuno Oliveira ; Isabel Praça
  • Journal: Future Internet
  • Electronic ISSN: 1999-5903
  • Year: 2022
  • Volume: 14
  • Issue: 4
  • Pages: 108
  • DOI: 10.3390/fi14040108
  • Language: English
  • Publisher: MDPI
  • Abstract: Adversarial attacks pose a major threat to machine learning and to the systems that rely on it. In the cybersecurity domain, adversarial cyber-attack examples capable of evading detection are especially concerning. However, an adversarial example generated for a domain with tabular data must be realistic within that domain. This work establishes the fundamental constraint levels required to achieve realism and introduces the adaptative perturbation pattern method (A2PM) to fulfill these constraints in a gray-box setting. A2PM relies on pattern sequences that are independently adapted to the characteristics of each class to create valid and coherent data perturbations. The proposed method was evaluated in a cybersecurity case study with two scenarios: Enterprise and Internet of Things (IoT) networks. Multilayer perceptron (MLP) and random forest (RF) classifiers were created with regular and adversarial training, using the CIC-IDS2017 and IoT-23 datasets. In each scenario, targeted and untargeted attacks were performed against the classifiers, and the generated examples were compared with the original network traffic flows to assess their realism. The obtained results demonstrate that A2PM provides a scalable generation of realistic adversarial examples, which can be advantageous for both adversarial training and attacks.
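The core idea in the abstract, perturbation patterns adapted per class so that generated examples stay valid and coherent, can be illustrated with a toy sketch. This is a hypothetical simplification, not the authors' A2PM implementation: the `IntervalPattern` class, its `fit`/`perturb` methods, and the uniform-step perturbation are all illustrative assumptions. It only shows the constraint-respecting principle of learning per-class feature ranges and clipping perturbed samples back into them.

```python
import numpy as np

class IntervalPattern:
    """Hypothetical per-class pattern (illustration only, not A2PM itself):
    learns the valid value range of each feature from one class's data and
    keeps perturbed samples inside those ranges."""

    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon  # relative perturbation magnitude
        self.lo = None
        self.hi = None

    def fit(self, X):
        # Record each feature's observed minimum and maximum for this class.
        self.lo = X.min(axis=0)
        self.hi = X.max(axis=0)
        return self

    def perturb(self, x, rng):
        # Random step scaled to each feature's valid span, then clipped back
        # into the observed range so the example remains coherent.
        span = self.hi - self.lo
        step = rng.uniform(-1.0, 1.0, size=x.shape) * self.epsilon * span
        return np.clip(x + step, self.lo, self.hi)

rng = np.random.default_rng(0)
X_class = rng.uniform(0.0, 1.0, size=(100, 4))  # synthetic flows of one class
pattern = IntervalPattern(epsilon=0.2).fit(X_class)
adv = pattern.perturb(X_class[0], rng)
```

In this sketch, fitting one pattern per class mirrors the paper's point that perturbations are "independently adapted to the characteristics of each class"; the clipping step stands in for the validity constraints that keep tabular adversarial examples realistic.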