Paper Title
SecureSense: Defending Adversarial Attack for Secure Device-Free Human Activity Recognition
Paper Authors
Paper Abstract
Deep neural networks have empowered accurate device-free human activity recognition, which has wide applications. Deep models can extract robust features from various sensors and generalize well even in challenging situations such as data-insufficient cases. However, these systems can be vulnerable to input perturbations, i.e., adversarial attacks. We empirically demonstrate that both black-box Gaussian attacks and modern white-box adversarial attacks can cause their accuracy to plummet. In this paper, we first point out that this phenomenon can bring severe safety hazards to device-free sensing systems, and then propose a novel learning framework, SecureSense, to defend against common attacks. SecureSense aims to achieve consistent predictions regardless of whether or not its input is under attack, alleviating the negative effect of the distribution perturbation caused by adversarial attacks. Extensive experiments demonstrate that our proposed method can significantly enhance the robustness of existing deep models against possible attacks. The results validate that our method works well on wireless human activity recognition and person identification systems. To the best of our knowledge, this is the first work to investigate adversarial attacks and further develop a novel defense framework for wireless human activity recognition in mobile computing research.
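The abstract describes the core idea of SecureSense as consistency of predictions between clean and attacked inputs. A minimal sketch of that idea, assuming a supervised cross-entropy term on the clean input plus a KL-divergence consistency term between clean and attacked predictions (the exact loss, weighting `lam`, and the Gaussian noise level `sigma` are our illustrative assumptions, not details given in the abstract):

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def gaussian_attack(x, sigma=0.1, rng=None):
    # black-box Gaussian perturbation of the input signal (one of the
    # attacks the paper evaluates)
    rng = np.random.default_rng(0) if rng is None else rng
    return x + rng.normal(0.0, sigma, size=x.shape)

def consistency_loss(clean_logits, attacked_logits, label, lam=1.0):
    # hypothetical training objective: supervised loss on the clean input
    # plus a KL term that pushes the attacked prediction toward the clean one
    p_clean = softmax(clean_logits)
    p_att = softmax(attacked_logits)
    ce = -np.log(p_clean[label] + 1e-12)                               # supervised term
    kl = np.sum(p_clean * np.log((p_clean + 1e-12) / (p_att + 1e-12)))  # consistency term
    return ce + lam * kl
```

When the attacked logits equal the clean logits the KL term vanishes and the loss reduces to the plain cross-entropy, so minimizing this objective drives the model toward the "consistent predictions regardless of attack" behavior the abstract describes.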