Paper Title
Adversarial Attacks on Machine Learning Cybersecurity Defences in Industrial Control Systems
Paper Authors
Paper Abstract
The proliferation and application of machine learning based Intrusion Detection Systems (IDS) have allowed for more flexibility and efficiency in the automated detection of cyber attacks in Industrial Control Systems (ICS). However, the introduction of such IDSs has also created an additional attack vector: the learning models themselves may be subject to cyber attacks, otherwise referred to as Adversarial Machine Learning (AML). Such attacks may have severe consequences in ICS environments, as adversaries could potentially bypass the IDS. This could lead to delayed attack detection, which may result in infrastructure damage, financial loss, and even loss of life. This paper explores how adversarial learning can be used to target supervised models by generating adversarial samples using the Jacobian-based Saliency Map Attack (JSMA) and exploring classification behaviours. The analysis also examines how such samples can be used to improve the robustness of supervised models through adversarial training. An authentic power system dataset was used to support the experiments presented herein. Overall, the classification performance of two widely used classifiers, Random Forest and J48, decreased by 16 and 20 percentage points respectively when adversarial samples were present. Their performance improved following adversarial training, demonstrating increased robustness to such attacks.
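As a rough illustration of the workflow summarised above, the sketch below crafts JSMA-style adversarial samples against a small differentiable surrogate network (Random Forest and J48 are not differentiable, so the samples are assumed to be crafted on a surrogate and transferred), measures a Random Forest's accuracy on them, and then adversarially retrains it on the augmented data. This is a minimal sketch, not the paper's implementation: the placeholder dataset, surrogate architecture, and all hyper-parameters (theta, max_iter, n_estimators) are illustrative assumptions, and scikit-learn's RandomForestClassifier stands in for the classifiers evaluated in the paper.

```python
"""Hedged sketch: JSMA-style sample crafting + adversarial training.
Placeholder data and hyper-parameters only; not the paper's exact setup."""
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score


def jsma_perturb(model, x, target, theta=0.2, max_iter=10):
    """Greedy JSMA-style attack: at each step, perturb the single feature
    whose Jacobian entry most increases the target-class logit."""
    x_adv = x.clone().detach()
    for _ in range(max_iter):
        x_adv.requires_grad_(True)
        logits = model(x_adv.unsqueeze(0)).squeeze(0)
        if logits.argmax().item() == target:             # surrogate already fooled
            break
        grad = torch.autograd.grad(logits[target], x_adv)[0]
        idx = grad.abs().argmax().item()                 # most salient feature
        step = torch.zeros_like(x_adv)
        step[idx] = theta * grad[idx].sign()
        x_adv = (x_adv + step).clamp(0.0, 1.0).detach()  # keep features in [0, 1]
    return x_adv.detach()


# --- placeholder data standing in for the power system dataset ---
rng = np.random.default_rng(0)
X = rng.random((2000, 20)).astype(np.float32)            # 20 scaled features
y = (X[:, 0] + X[:, 1] > 1.0).astype(np.int64)           # 0 = normal, 1 = attack

# Surrogate network used only to obtain a Jacobian (RF/J48 are not
# differentiable, so samples are crafted here and transferred).
surrogate = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
Xt, yt = torch.from_numpy(X), torch.from_numpy(y)
for _ in range(200):
    opt.zero_grad()
    loss_fn(surrogate(Xt), yt).backward()
    opt.step()

# Craft adversarial versions of attack-class samples (target class = "normal").
attack_idx = np.where(y == 1)[0][:200]
X_adv = np.stack([jsma_perturb(surrogate, Xt[i], target=0).numpy()
                  for i in attack_idx])
y_adv = y[attack_idx]                                     # true labels remain "attack"

# Target classifier before and after adversarial training.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("clean accuracy:      ", accuracy_score(y, rf.predict(X)))
print("adversarial accuracy:", accuracy_score(y_adv, rf.predict(X_adv)))

rf_adv = RandomForestClassifier(n_estimators=100, random_state=0).fit(
    np.vstack([X, X_adv]), np.concatenate([y, y_adv]))
print("after adv. training: ", accuracy_score(y_adv, rf_adv.predict(X_adv)))
```

The same augmentation step (appending crafted samples with their true labels to the training set before refitting) is what "adversarial training" refers to in the abstract; the surrogate-and-transfer step is an assumption made here because tree-based models expose no gradients for JSMA.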