Paper Title

Membership Inference Attacks Against Object Detection Models

Paper Authors

Yeachan Park, Myungjoo Kang

Paper Abstract

Machine learning models can leak information about the datasets on which they were trained. In this paper, we present the first membership inference attack against black-box object detection models, which determines whether a given data record was used in training. To attack object detection models, we devise a novel method called the canvas method, in which the predicted bounding boxes are drawn on an empty image to form the attack model's input. In our experiments, we successfully reveal the membership status of privacy-sensitive data used to train one-stage and two-stage detection models. We then propose defense strategies and also conduct transfer attacks across models and datasets. Our results show that object detection models, like other models, are vulnerable to membership inference attacks.
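The abstract only sketches the canvas method, so the following is a minimal illustrative rendering in Python, not the paper's exact specification: it assumes each predicted box is filled onto a blank single-channel image with its confidence score as the pixel intensity, and the names `make_canvas` and `canvas_size` are hypothetical.

```python
import numpy as np

def make_canvas(boxes, scores, canvas_size=(128, 128)):
    """Render predicted bounding boxes onto an empty (black) canvas.

    boxes:  array of [x1, y1, x2, y2] in normalized [0, 1] coordinates.
    scores: confidence score per box, used here as the fill intensity
            (an assumption; the paper may encode predictions differently).
    Returns a float32 image that can serve as input to an attack model.
    """
    h, w = canvas_size
    canvas = np.zeros((h, w), dtype=np.float32)
    for (x1, y1, x2, y2), s in zip(boxes, scores):
        r1, r2 = int(y1 * h), int(y2 * h)
        c1, c2 = int(x1 * w), int(x2 * w)
        # Where boxes overlap, keep the higher confidence value.
        canvas[r1:r2, c1:c2] = np.maximum(canvas[r1:r2, c1:c2], s)
    return canvas

# Example: two detections returned by a black-box detector.
boxes = np.array([[0.1, 0.2, 0.4, 0.6], [0.5, 0.5, 0.9, 0.9]])
scores = np.array([0.95, 0.60])
x = make_canvas(boxes, scores)       # shape (128, 128)
attack_input = x[None, None, :, :]   # e.g. NCHW batch for a CNN attack model
```

The intuition behind such an encoding is that a detector tends to produce tighter, higher-confidence boxes on training images than on unseen ones, so a binary classifier trained on these canvases can separate members from non-members.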
