Paper Title
Conflicting Interactions Among Protection Mechanisms for Machine Learning Models
Paper Authors
Paper Abstract
Nowadays, systems based on machine learning (ML) are widely used in different domains. Given their popularity, ML models have become targets for various attacks. As a result, research at the intersection of security/privacy and ML has flourished. Typically, such work has focused on individual types of security/privacy concerns and mitigations thereof. However, in real-life deployments, an ML model will need to be protected against several concerns simultaneously. A protection mechanism optimal for one security or privacy concern may interact negatively with mechanisms intended to address other concerns. Despite its practical relevance, the potential for such conflicts has not been studied adequately. We first provide a framework for analyzing such "conflicting interactions". We then focus on systematically analyzing pairwise interactions between protection mechanisms for one concern, model and data ownership verification, and two other classes of ML protection mechanisms: differentially private training, and robustness against model evasion. We find that several pairwise interactions result in conflicts. We explore potential approaches for avoiding such conflicts. First, we study the effect of hyperparameter relaxations, finding that there is no sweet spot that balances the performance of both protection mechanisms. Second, we explore whether modifying one type of protection mechanism (ownership verification) so as to decouple it from factors that may be impacted by a conflicting mechanism (differentially private training or robustness to model evasion) can avoid the conflict. We show that this approach can avoid the conflict between ownership verification and differentially private training, but has no effect on the conflict with robustness to model evasion. Finally, we identify gaps in the landscape of studying interactions between other types of ML protection mechanisms.
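The abstract refers to pairwise combinations of protection mechanisms, for example a backdoor-style ownership watermark trained together with differentially private training. The sketch below is a minimal, self-contained toy illustrating that kind of pairing: a NumPy logistic regression trained with DP-SGD (per-example gradient clipping plus Gaussian noise, in the style of Abadi et al., 2016) on data that includes a deliberately mislabeled trigger set used for ownership verification. The data, model, trigger set, and all hyperparameters here are assumptions made for illustration only; this is not the paper's experimental setup.

```python
# Illustrative toy only: backdoor-style watermark + DP-SGD on a NumPy
# logistic regression. All data, trigger-set, and hyperparameter choices
# are assumptions for illustration, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary task: classes separated along feature 0.
n, d = 2000, 2
X0 = rng.normal(loc=(-2.0, 0.0), scale=1.0, size=(n // 2, d))
X1 = rng.normal(loc=(+2.0, 0.0), scale=1.0, size=(n // 2, d))
X = np.vstack([X0, X1])
y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])

# Backdoor-style watermark trigger set: out-of-distribution points that the
# owner deliberately labels "wrong" (class 0 in a region a clean model would
# lean towards class 1) and later uses to verify ownership.
n_wm = 50
X_wm = rng.normal(loc=(2.0, 8.0), scale=0.5, size=(n_wm, d))
y_wm = np.zeros(n_wm)

X_tr = np.vstack([X, X_wm])
y_tr = np.concatenate([y, y_wm])


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))


def train(X, y, epochs=40, lr=0.5, batch=64, dp=False,
          clip_norm=1.0, noise_multiplier=1.0):
    """Logistic regression trained with SGD, or with DP-SGD when dp=True
    (per-example gradient clipping + Gaussian noise on the summed gradient)."""
    w, b = np.zeros(X.shape[1]), 0.0
    idx = np.arange(len(X))
    for _ in range(epochs):
        rng.shuffle(idx)
        for start in range(0, len(X), batch):
            sel = idx[start:start + batch]
            xb, yb = X[sel], y[sel]
            err = sigmoid(xb @ w + b) - yb                      # per-example residuals
            g = np.hstack([err[:, None] * xb, err[:, None]])    # per-example grads [gw | gb]
            if dp:
                norms = np.linalg.norm(g, axis=1, keepdims=True)
                g = g / np.maximum(1.0, norms / clip_norm)      # clip each example
                g_sum = g.sum(axis=0)
                g_sum += rng.normal(0.0, noise_multiplier * clip_norm,
                                    size=g_sum.shape)           # add calibrated noise
                g_mean = g_sum / len(sel)
            else:
                g_mean = g.mean(axis=0)
            w -= lr * g_mean[:-1]
            b -= lr * g_mean[-1]
    return w, b


def accuracy(w, b, X, y):
    return float(((sigmoid(X @ w + b) > 0.5) == y).mean())


for dp in (False, True):
    w, b = train(X_tr, y_tr, dp=dp)
    print(f"dp={dp}: task accuracy {accuracy(w, b, X, y):.2f}, "
          f"watermark accuracy {accuracy(w, b, X_wm, y_wm):.2f}")
```

Comparing the runs with and without dp=True gives a small-scale view of the kind of pairwise interaction the paper studies: the per-example clipping and added noise that provide privacy also limit the influence of the small, atypical trigger set that the watermark relies on, so watermark retention can degrade precisely when privacy protection is applied.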