Paper Title

Certifying Decision Trees Against Evasion Attacks by Program Analysis

Paper Authors

Stefano Calzavara, Pietro Ferrara, Claudio Lucchese

Paper Abstract

Machine learning has proved invaluable for a range of different tasks, yet it has also proved vulnerable to evasion attacks, i.e., maliciously crafted perturbations of input data designed to force mispredictions. In this paper we propose a novel technique to verify the security of decision tree models against evasion attacks with respect to an expressive threat model, where the attacker can be represented by an arbitrary imperative program. Our approach exploits the interpretability property of decision trees to transform them into imperative programs, which are amenable to traditional program analysis techniques. By leveraging the abstract interpretation framework, we are able to soundly verify the security guarantees of decision tree models trained over publicly available datasets. Our experiments show that our technique is both precise and efficient, yielding only a minimal number of false positives and scaling up to cases which are intractable for a competitor approach.
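The key step the abstract describes, compiling a decision tree into an imperative program so that standard program analysis applies, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the dictionary tree encoding, the `tree_to_program` helper, and the example tree are all assumptions made for the sketch.

```python
# Minimal sketch: compile a decision tree into imperative (Python) source code.
# Internal nodes: {"feature": i, "threshold": t, "left": ..., "right": ...}
# Leaves:         {"label": y}
# (This encoding is an assumption for illustration, not the paper's format.)

def tree_to_program(node, indent=1):
    """Recursively emit nested if/else statements for a decision tree."""
    pad = "    " * indent
    if "label" in node:
        return f"{pad}return {node['label']}\n"
    code = f"{pad}if x[{node['feature']}] <= {node['threshold']}:\n"
    code += tree_to_program(node["left"], indent + 1)
    code += f"{pad}else:\n"
    code += tree_to_program(node["right"], indent + 1)
    return code

# A tiny hypothetical tree over two features, predicting labels -1 / +1.
tree = {
    "feature": 0, "threshold": 0.5,
    "left": {"label": -1},
    "right": {"feature": 1, "threshold": 2.0,
              "left": {"label": -1},
              "right": {"label": 1}},
}

src = "def predict(x):\n" + tree_to_program(tree)
print(src)
```

Once the tree is ordinary imperative code, an abstract interpreter can propagate an abstraction of the attacker-perturbed input region (e.g., intervals) through the branches to check whether any reachable leaf flips the prediction.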
