Paper Title

Probing Predictions on OOD Images via Nearest Categories

Authors

Yang, Yao-Yuan, Rashtchian, Cyrus, Salakhutdinov, Ruslan, Chaudhuri, Kamalika

Abstract

We study out-of-distribution (OOD) prediction behavior of neural networks when they classify images from unseen classes or corrupted images. To probe the OOD behavior, we introduce a new measure, nearest category generalization (NCG), where we compute the fraction of OOD inputs that are classified with the same label as their nearest neighbor in the training set. Our motivation stems from understanding the prediction patterns of adversarially robust networks, since previous work has identified unexpected consequences of training to be robust to norm-bounded perturbations. We find that robust networks have consistently higher NCG accuracy than natural training, even when the OOD data is much farther away than the robustness radius. This implies that the local regularization of robust training has a significant impact on the network's decision regions. We replicate our findings using many datasets, comparing new and existing training methods. Overall, adversarially robust networks resemble a nearest neighbor classifier when it comes to OOD data. Code available at https://github.com/yangarbiter/nearest-category-generalization.
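The NCG measure defined above can be sketched in a few lines: for each OOD input, find its nearest neighbor in the training set and check whether the model's prediction matches that neighbor's label. This is a minimal illustration, not the paper's implementation (the official code is in the linked repository); the function name `ncg_accuracy`, the use of Euclidean distance on flattened inputs, and the `model_predict` callable are all assumptions for this sketch.

```python
import numpy as np

def ncg_accuracy(model_predict, X_train, y_train, X_ood):
    """Fraction of OOD inputs whose predicted label matches the label of
    their nearest training-set neighbor (nearest category generalization).

    model_predict: callable mapping an array of inputs to predicted labels.
    X_train, X_ood: 2D arrays of flattened inputs; y_train: training labels.
    """
    preds = model_predict(X_ood)
    matches = 0
    for x, pred in zip(X_ood, preds):
        # Euclidean distance from this OOD point to every training point.
        dists = np.linalg.norm(X_train - x, axis=1)
        nn_label = y_train[np.argmin(dists)]
        matches += int(pred == nn_label)
    return matches / len(X_ood)
```

A model that behaves exactly like a 1-nearest-neighbor classifier on OOD data would score NCG accuracy 1.0; the paper's finding is that adversarially robust networks score consistently higher on this measure than naturally trained ones.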
