Paper Title
Adaptative Perturbation Patterns: Realistic Adversarial Learning for Robust Intrusion Detection
Paper Authors
Paper Abstract
Adversarial attacks pose a major threat to machine learning and to the systems that rely on it. In the cybersecurity domain, adversarial cyber-attack examples capable of evading detection are especially concerning. Nonetheless, an example generated for a domain with tabular data must be realistic within that domain. This work establishes the fundamental constraint levels required to achieve realism and introduces the Adaptative Perturbation Pattern Method (A2PM) to fulfill these constraints in a gray-box setting. A2PM relies on pattern sequences that are independently adapted to the characteristics of each class to create valid and coherent data perturbations. The proposed method was evaluated in a cybersecurity case study with two scenarios: Enterprise and Internet of Things (IoT) networks. Multilayer Perceptron (MLP) and Random Forest (RF) classifiers were created with regular and adversarial training, using the CIC-IDS2017 and IoT-23 datasets. In each scenario, targeted and untargeted attacks were performed against the classifiers, and the generated examples were compared with the original network traffic flows to assess their realism. The obtained results demonstrate that A2PM provides a scalable generation of realistic adversarial examples, which can be advantageous for both adversarial training and attacks.
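To make the abstract's core idea concrete, the sketch below illustrates per-class, constraint-bounded perturbation of tabular samples in a gray-box setting. It is not the authors' A2PM implementation: the toy data, the interval-based "patterns", the perturb helper, and all parameter values are illustrative assumptions, using only standard NumPy and scikit-learn calls.

    # Illustrative sketch (not the A2PM library): perturb tabular samples within
    # feature bounds learned per class, until a fitted classifier is evaded.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # Toy tabular data: two classes, four numeric features (stand-in for flow features).
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)

    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

    # One simplified "pattern" per class: feature-wise bounds observed in that class,
    # so perturbed values stay inside ranges plausible for the class (validity/coherence).
    patterns = {c: (X[y == c].min(axis=0), X[y == c].max(axis=0)) for c in np.unique(y)}

    def perturb(x, c, step=0.1, max_iter=20):
        """Nudge features of sample x within class-c bounds until the gray-box
        classifier's prediction changes (untargeted attack) or the budget runs out."""
        lo, hi = patterns[c]
        adv = x.copy()
        for _ in range(max_iter):
            if clf.predict(adv.reshape(1, -1))[0] != c:
                break  # prediction flipped: adversarial example found
            adv = np.clip(adv + rng.uniform(-step, step, size=adv.shape) * (hi - lo), lo, hi)
        return adv

    x_adv = perturb(X[0], y[0])

In the paper's setting, the same loop would target a network intrusion detection classifier, and the generated examples would additionally be checked against the original traffic flows for realism; this sketch only conveys the class-wise, bounded-perturbation structure.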