Paper Title

How Robust are Randomized Smoothing based Defenses to Data Poisoning?

Authors

Mehra, Akshay, Kailkhura, Bhavya, Chen, Pin-Yu, Hamm, Jihun

Abstract


Predictions of certifiably robust classifiers remain constant in a neighborhood of a point, making them resilient to test-time attacks with a guarantee. In this work, we present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality in achieving high certified adversarial robustness. Specifically, we propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers. Unlike other poisoning attacks that reduce the accuracy of the poisoned models on a small set of target points, our attack reduces the average certified radius (ACR) of an entire target class in the dataset. Moreover, our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods such as Gaussian data augmentation \cite{cohen2019certified}, MACER \cite{zhai2020macer}, and SmoothAdv \cite{salman2019provably} that achieve high certified adversarial robustness. To make the attack harder to detect, we use clean-label poisoning points with imperceptible distortions. The effectiveness of the proposed method is evaluated by poisoning the MNIST and CIFAR10 datasets, training deep neural networks with the aforementioned training methods, and certifying their robustness with randomized smoothing. The ACR of the target class, for models trained on the generated poison data, can be reduced by more than 30%. Moreover, the poisoned data is transferable to models trained with different training methods and to models with different architectures.
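For context on the metric the attack targets: under randomized smoothing, the ℓ2 certified radius of a point is R = σ · Φ⁻¹(p_A), where σ is the noise level and p_A is a lower confidence bound on the probability of the top class under Gaussian noise (Cohen et al., 2019); the ACR averages this radius over the dataset, counting misclassified points as radius 0. The sketch below is illustrative only and is not the paper's attack code; the function names and the helper for ACR are assumptions.

```python
from statistics import NormalDist

def certified_radius(p_a_lower: float, sigma: float) -> float:
    """l2 certified radius R = sigma * inverse-Gaussian-CDF(p_A)
    from Cohen et al. (2019). Returns 0.0 when the lower bound
    p_a_lower <= 0.5, i.e. when nothing can be certified."""
    if p_a_lower <= 0.5:
        return 0.0
    return sigma * NormalDist().inv_cdf(p_a_lower)

def average_certified_radius(radii, correct) -> float:
    """ACR over a dataset (hypothetical helper): a point contributes
    its certified radius if the smoothed prediction is correct,
    and 0 otherwise."""
    return sum(r if ok else 0.0 for r, ok in zip(radii, correct)) / len(radii)

# Example: with sigma = 0.25 and a confident bound p_A = 0.99,
# the certified radius is about 0.58.
r = certified_radius(0.99, 0.25)
acr = average_certified_radius([r, 0.4], [True, False])
```

A poisoning attack that lowers p_A on target-class points (e.g. by pushing them toward the decision boundary under noise) shrinks each radius and hence the ACR, even when clean accuracy is unchanged.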
