Title

Adversarial Transferability in Wearable Sensor Systems

Authors

Ramesh Kumar Sah, Hassan Ghasemzadeh

Abstract

Machine learning is used for inference and decision making in wearable sensor systems. However, recent studies have found that machine learning algorithms are easily fooled by adversarial perturbations added to their inputs. More interestingly, adversarial examples generated for one machine learning system are often also effective against other systems. This property of adversarial examples is called transferability. In this work, we take the first steps in studying adversarial transferability in wearable sensor systems from the following perspectives: 1) transferability between machine learning systems, 2) transferability across subjects, 3) transferability across sensor body locations, and 4) transferability across datasets. We found strong untargeted transferability in most cases. Targeted attacks were less successful, with success scores ranging from $0\%$ to $80\%$. The transferability of adversarial examples depends on many factors, such as the inclusion of data from all subjects, the sensor body position, the number of samples in the dataset, the type of learning algorithm, and the distributions of the source and target system datasets. Transferability decreases sharply as the data distributions of the source and target systems become more distinct. We also provide guidelines for the community on designing robust sensor systems.
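The transferability studied above can be illustrated with a minimal sketch. The code below is not the paper's method: it uses the Fast Gradient Sign Method (FGSM) on two hypothetical linear (logistic-regression) classifiers standing in for the source and target sensor systems, and shows that a perturbation crafted against the source model also flips the prediction of the similar target model. All weights and inputs are made-up toy values.

```python
import numpy as np

def fgsm_linear(w, b, x, y, eps):
    """FGSM perturbation for a logistic-regression model.

    The gradient of the logistic loss w.r.t. the input x is (p - y) * w,
    so the attacked input is x + eps * sign((p - y) * w).
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # predicted probability of class 1
    return x + eps * np.sign((p - y) * w)

# Two hypothetical classifiers with similar decision boundaries,
# standing in for the source and target sensor systems.
w_src, b_src = np.array([1.0, -1.0]), 0.0
w_tgt, b_tgt = np.array([0.9, -1.1]), 0.05

x, y = np.array([0.3, 0.1]), 1            # clean input with true label 1
x_adv = fgsm_linear(w_src, b_src, x, y, eps=0.5)

pred = lambda w, b, v: int(w @ v + b > 0)
# Perturbation crafted on the source model transfers to the target model:
print(pred(w_src, b_src, x), pred(w_tgt, b_tgt, x))          # clean: 1 1
print(pred(w_src, b_src, x_adv), pred(w_tgt, b_tgt, x_adv))  # adversarial: 0 0
```

Because the two models' weight vectors point in similar directions, the sign of the loss gradient is nearly the same for both, which is one intuition for why untargeted attacks transfer so readily.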
