Title

A Critical Evaluation of Open-World Machine Learning

Authors

Liwei Song, Vikash Sehwag, Arjun Nitin Bhagoji, Prateek Mittal

Abstract

Open-world machine learning (ML) combines closed-world models trained on in-distribution data with out-of-distribution (OOD) detectors, which aim to detect and reject OOD inputs. Previous works on open-world ML systems usually fail to test their reliability under diverse, and possibly adversarial conditions. Therefore, in this paper, we seek to understand how resilient are state-of-the-art open-world ML systems to changes in system components? With our evaluation across 6 OOD detectors, we find that the choice of in-distribution data, model architecture and OOD data have a strong impact on OOD detection performance, inducing false positive rates in excess of $70\%$. We further show that OOD inputs with 22 unintentional corruptions or adversarial perturbations render open-world ML systems unusable with false positive rates of up to $100\%$. To increase the resilience of open-world ML, we combine robust classifiers with OOD detection techniques and uncover a new trade-off between OOD detection and robustness.

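The abstract describes the standard open-world setup: a closed-world classifier wrapped by a score-based OOD detector that rejects suspicious inputs, evaluated by the false positive rate on OOD data. Below is a minimal sketch of that pipeline, assuming a maximum-softmax-probability baseline detector and synthetic logits as stand-ins; the function names, threshold choice, and FPR-at-95%-TPR computation are illustrative and are not the six specific detectors evaluated in the paper.

```python
# Minimal sketch of an open-world ML pipeline: a closed-world classifier
# plus a score-threshold OOD detector (maximum softmax probability baseline).
# All names, thresholds, and data below are illustrative stand-ins.
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Row-wise softmax over class logits."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def msp_score(logits: np.ndarray) -> np.ndarray:
    """OOD score: higher means more likely in-distribution."""
    return softmax(logits).max(axis=1)

def open_world_predict(logits: np.ndarray, threshold: float) -> np.ndarray:
    """Closed-world predictions, with -1 for inputs rejected as OOD."""
    scores = msp_score(logits)
    preds = logits.argmax(axis=1)
    preds[scores < threshold] = -1  # reject as out-of-distribution
    return preds

def fpr_at_tpr(id_scores: np.ndarray, ood_scores: np.ndarray, tpr: float = 0.95) -> float:
    """False positive rate on OOD inputs at the threshold that accepts
    `tpr` of in-distribution inputs (the metric stressed in the abstract)."""
    threshold = np.quantile(id_scores, 1.0 - tpr)      # keep top `tpr` fraction of ID scores
    return float((ood_scores >= threshold).mean())     # OOD wrongly accepted as in-distribution

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    id_logits = rng.normal(0, 4, size=(1000, 10))   # stand-in for in-distribution logits
    ood_logits = rng.normal(0, 1, size=(1000, 10))  # stand-in for OOD logits (less peaked)
    print("FPR@95%TPR:", fpr_at_tpr(msp_score(id_logits), msp_score(ood_logits)))
    preds = open_world_predict(ood_logits, threshold=0.5)
    print("OOD inputs rejected:", int((preds == -1).sum()), "of", len(preds))
```

Corruptions, adversarial perturbations, or a mismatch between the in-distribution data and the OOD source would shift these score distributions, which is how the paper's reported false positive rates of 70% to 100% arise.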