Title

Exploiting Vulnerabilities of Deep Learning-based Energy Theft Detection in AMI through Adversarial Attacks

Authors

Jiangnan Li, Yingyuan Yang, Jinyuan Stella Sun

Abstract

Effective detection of energy theft can prevent revenue losses of utility companies and is also important for smart grid security. In recent years, enabled by the massive fine-grained smart meter data, deep learning (DL) approaches have become popular in the literature for detecting energy theft in the advanced metering infrastructure (AMI). However, as neural networks have been shown to be vulnerable to adversarial examples, the security of the DL models is of concern. In this work, we study the vulnerabilities of DL-based energy theft detection through adversarial attacks, including single-step attacks and iterative attacks. From the attacker's point of view, we design the SearchFromFree framework that consists of 1) a random adversarial measurement initialization approach to maximize the stolen profit and 2) a step-size searching scheme to increase the performance of black-box iterative attacks. The evaluation based on three types of neural networks shows that the adversarial attacker can report extremely low consumption measurements to the utility without being detected by the DL models. We finally discuss the potential defense mechanisms against adversarial attacks in energy theft detection.
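The two components named in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the detector `detector_score`, the threshold, the step-size grid, and the gain heuristic are all hypothetical stand-ins, and a real attack would query the utility's deployed DL model as the black box instead.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box detector: flags a meter-reading vector whose
# daily total falls far below a nominal profile. Only its scalar score
# is observable to the attacker, as in a black-box setting.
def detector_score(m):
    return max(0.0, 30.0 - m.sum())  # score above THRESHOLD => flagged

THRESHOLD = 5.0  # assumed detection threshold

def search_from_free_attack(n=48, max_iter=200, step_sizes=(0.01, 0.05, 0.2)):
    # 1) Random near-zero initialization: start "from free" (almost no
    #    reported consumption, i.e. maximal stolen profit) and add back
    #    only as much consumption as needed to slip under the detector.
    m = rng.uniform(0.0, 0.1, size=n)
    for _ in range(max_iter):
        if detector_score(m) <= THRESHOLD:
            return m  # evades detection with low reported consumption
        # 2) Step-size search: probe several step sizes per iteration and
        #    keep the candidate with the best score drop per unit of
        #    added (i.e. paid-for) energy.
        best = None
        for s in step_sizes:
            cand = m + s * rng.uniform(0.0, 1.0, size=n)
            drop = detector_score(m) - detector_score(cand)
            gain = drop / (cand.sum() - m.sum() + 1e-9)
            if best is None or gain > best[0]:
                best = (gain, cand)
        m = best[1]
    return m  # may still be flagged if max_iter is exhausted
```

The "search from free" ordering matters: initializing near zero and growing upward biases the final adversarial measurement toward the lowest consumption that still evades the detector, rather than perturbing an honest (high-consumption) reading downward.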
