Paper Title

RegMixup: Mixup as a Regularizer Can Surprisingly Improve Accuracy and Out-of-Distribution Robustness

Paper Authors

Francesco Pinto, Harry Yang, Ser-Nam Lim, Philip H. S. Torr, Puneet K. Dokania

Paper Abstract


We show that the effectiveness of the well celebrated Mixup [Zhang et al., 2018] can be further improved if instead of using it as the sole learning objective, it is utilized as an additional regularizer to the standard cross-entropy loss. This simple change not only provides much improved accuracy but also significantly improves the quality of the predictive uncertainty estimation of Mixup in most cases under various forms of covariate shifts and out-of-distribution detection experiments. In fact, we observe that Mixup yields much degraded performance on detecting out-of-distribution samples possibly, as we show empirically, because of its tendency to learn models that exhibit high-entropy throughout; making it difficult to differentiate in-distribution samples from out-distribution ones. To show the efficacy of our approach (RegMixup), we provide thorough analyses and experiments on vision datasets (ImageNet & CIFAR-10/100) and compare it with a suite of recent approaches for reliable uncertainty estimation.
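The core change the abstract describes is that the mixup cross-entropy term is added to the standard cross-entropy loss rather than replacing it. Below is a minimal, dependency-free sketch of that idea for a single example; the function names, the interpolation of two hard labels, and the weighting hyperparameter `eta` are illustrative assumptions, not the paper's exact formulation.

```python
import math

def softmax(logits):
    # numerically stable softmax over a list of logits
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(logits, target):
    # negative log-probability of the target class
    return -math.log(softmax(logits)[target])

def mixup_cross_entropy(logits, target_a, target_b, lam):
    # CE against an interpolated label:
    # lam * CE(y_a) + (1 - lam) * CE(y_b), with lam ~ Beta(alpha, alpha)
    return lam * cross_entropy(logits, target_a) + \
        (1.0 - lam) * cross_entropy(logits, target_b)

def regmixup_loss(logits_clean, target, logits_mixed, target_b, lam, eta=1.0):
    # RegMixup idea (sketch): standard CE on the clean input, plus the
    # mixup term as an additional regularizer weighted by eta (assumed name)
    return cross_entropy(logits_clean, target) + \
        eta * mixup_cross_entropy(logits_mixed, target, target_b, lam)
```

In a training loop, `logits_mixed` would come from forwarding the interpolated input `lam * x_i + (1 - lam) * x_j`; with `eta = 0` the loss reduces to plain cross-entropy, recovering the standard baseline.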
