Paper Title
NoisyMix: Boosting Model Robustness to Common Corruptions
Paper Authors
Paper Abstract
For many real-world applications, obtaining stable and robust statistical performance is more important than simply achieving state-of-the-art predictive test accuracy, so the robustness of neural networks is an increasingly important topic. Relatedly, data augmentation schemes have been shown to improve robustness with respect to input perturbations and domain shifts. Motivated by this, we introduce NoisyMix, a novel training scheme that promotes stability and leverages noisy augmentations in input and feature space to improve both model robustness and in-domain accuracy. NoisyMix produces models that are consistently more robust and that provide well-calibrated estimates of class membership probabilities. We demonstrate the benefits of NoisyMix on a range of benchmark datasets, including ImageNet-C, ImageNet-R, and ImageNet-P. Moreover, we provide theory to understand the implicit regularization and robustness of NoisyMix.
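To make the abstract's description concrete, below is a minimal, hedged sketch of a NoisyMix-style training step in PyTorch. It combines additive input noise, mixup of the noisy batch, and a stability (consistency) term that pulls predictions on augmented inputs toward predictions on clean inputs. This is not the authors' implementation: the function name noisymix_style_step and the hyperparameters noise_std and stability_weight are illustrative, and the paper's noisy mixup at hidden feature layers is omitted for brevity.

```python
import torch
import torch.nn.functional as F

def noisymix_style_step(model, x, y, alpha=1.0, noise_std=0.1, stability_weight=1.0):
    """One NoisyMix-style training step (illustrative sketch, not the paper's code).

    Combines additive input noise, mixup of the noisy batch, and a
    stability term that keeps predictions on augmented inputs close
    to predictions on clean inputs.
    """
    # Mixup coefficient drawn from a Beta(alpha, alpha) distribution.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    index = torch.randperm(x.size(0))

    # Input-space noisy augmentation: additive Gaussian noise.
    x_noisy = x + noise_std * torch.randn_like(x)

    # Mixup of the noisy batch with a shuffled copy of itself.
    x_mix = lam * x_noisy + (1.0 - lam) * x_noisy[index]

    logits_clean = model(x)
    logits_mix = model(x_mix)

    # Classification loss on the mixed batch, interpolating the targets.
    loss_cls = lam * F.cross_entropy(logits_mix, y) \
        + (1.0 - lam) * F.cross_entropy(logits_mix, y[index])

    # Stability (consistency) term: KL divergence between predictions on
    # augmented inputs and (detached) predictions on clean inputs.
    loss_stab = F.kl_div(
        F.log_softmax(logits_mix, dim=1),
        F.softmax(logits_clean.detach(), dim=1),
        reduction="batchmean",
    )

    return loss_cls + stability_weight * loss_stab
```

In a standard training loop, the returned loss would simply be backpropagated; stability_weight then trades off in-domain accuracy against consistency under the injected perturbations, which is the mechanism the abstract credits for improved robustness and calibration.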