Paper Title
On Proper Learnability between Average- and Worst-case Robustness
Paper Authors
Paper Abstract
Recently, Montasser et al. [2019] showed that finite VC dimension is not sufficient for proper adversarially robust PAC learning. In light of this hardness, there is a growing effort to study what type of relaxations to the adversarially robust PAC learning setup can enable proper learnability. In this work, we initiate the study of proper learning under relaxations of the worst-case robust loss. We give a family of robust loss relaxations under which VC classes are properly PAC learnable with sample complexity close to what one would require in the standard PAC learning setup. On the other hand, we show that for an existing and natural relaxation of the worst-case robust loss, finite VC dimension is not sufficient for proper learning. Lastly, we give new generalization guarantees for the adversarially robust empirical risk minimizer.
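For context, the worst-case (adversarial) robust loss studied in this line of work, following Montasser et al. [2019], can be written as below. The notation is a standard reconstruction of that setup; since the abstract does not spell out the paper's relaxation family, the interpolating loss that follows is only an illustrative sketch, not the paper's definition.

\[
  \ell_{\mathcal{U}}(h; x, y) \;=\; \sup_{z \in \mathcal{U}(x)} \mathbb{1}\!\left[\, h(z) \neq y \,\right],
  \qquad
  R_{\mathcal{U}}(h) \;=\; \mathbb{E}_{(x,y) \sim \mathcal{D}}\!\left[\, \ell_{\mathcal{U}}(h; x, y) \,\right],
\]

where \(\mathcal{U}(x)\) is the set of allowed perturbations of \(x\) (e.g., an \(\ell_p\) ball around \(x\)). One hypothetical way to relax this toward average-case robustness is to fix a measure \(\mu_x\) over \(\mathcal{U}(x)\) and tolerate a \(\beta\)-fraction of adversarial perturbations:

\[
  \ell_{\mathcal{U}}^{\beta}(h; x, y) \;=\; \mathbb{1}\!\left[\, \Pr_{z \sim \mu_x}\bigl[\, h(z) \neq y \,\bigr] > \beta \,\right],
\]

so that \(\beta \to 0\) recovers the worst-case loss (up to \(\mu_x\)-null sets of perturbations) while larger \(\beta\) yields increasingly lenient, average-case-like notions of robustness.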