Paper Title

Studying the Robustness of Anti-adversarial Federated Learning Models Detecting Cyberattacks in IoT Spectrum Sensors

Authors

Pedro Miguel Sánchez Sánchez, Alberto Huertas Celdrán, Timo Schenk, Adrian Lars Benjamin Iten, Gérôme Bovet, Gregorio Martínez Pérez, Burkhard Stiller

Abstract

Device fingerprinting combined with Machine and Deep Learning (ML/DL) reports promising performance when detecting cyberattacks targeting data managed by resource-constrained spectrum sensors. However, the amount of data needed to train models and the privacy concerns of such scenarios limit the applicability of centralized ML/DL-based approaches. Federated learning (FL) addresses these limitations by creating federated and privacy-preserving models. However, FL is vulnerable to malicious participants, and the impact of adversarial attacks on federated models detecting spectrum sensing data falsification (SSDF) attacks on spectrum sensors has not been studied. To address this challenge, the first contribution of this work is the creation of a novel dataset suitable for FL and modeling the behavior (usage of CPU, memory, or file system, among others) of resource-constrained spectrum sensors affected by different SSDF attacks. The second contribution is a pool of experiments analyzing and comparing the robustness of federated models according to i) three families of spectrum sensors, ii) eight SSDF attacks, iii) four scenarios dealing with unsupervised (anomaly detection) and supervised (binary classification) federated models, iv) up to 33% of malicious participants implementing data and model poisoning attacks, and v) four aggregation functions acting as anti-adversarial mechanisms to increase the models' robustness.
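The abstract's fifth experimental dimension, aggregation functions acting as anti-adversarial mechanisms, can be illustrated with a minimal sketch. The abstract does not name the four functions the paper evaluates, so the coordinate-wise median below is only a generic example of a robust aggregator; the function names and the toy client updates are assumptions for illustration, not the paper's setup.

```python
import numpy as np

def fed_avg(updates):
    """Plain FedAvg-style aggregation: coordinate-wise mean of client updates.
    A single extreme (poisoned) update can drag the mean arbitrarily far."""
    return np.mean(updates, axis=0)

def coordinate_median(updates):
    """Illustrative robust aggregation: coordinate-wise median.
    Tolerates a minority of poisoned updates, since the median ignores
    extreme values as long as honest clients form the majority."""
    return np.median(updates, axis=0)

# Three honest clients and one malicious client (25% poisoned, within the
# "up to 33% of malicious participants" range studied in the paper).
honest = [np.array([1.0, 2.0]),
          np.array([1.1, 1.9]),
          np.array([0.9, 2.1])]
poisoned = np.array([100.0, -100.0])  # model-poisoning update
updates = honest + [poisoned]

mean_agg = fed_avg(updates)            # heavily skewed by the attacker
median_agg = coordinate_median(updates)  # stays near the honest consensus
```

The mean lands near (25.75, -23.5), while the median stays at roughly (1.05, 1.95), close to the honest clients' values, which is the intuition behind using robust aggregation as an anti-adversarial mechanism in FL.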
