Paper Title

Delving into Out-of-Distribution Detection with Vision-Language Representations

Paper Authors

Yifei Ming, Ziyang Cai, Jiuxiang Gu, Yiyou Sun, Wei Li, Yixuan Li

Abstract

Recognizing out-of-distribution (OOD) samples is critical for machine learning systems deployed in the open world. The vast majority of OOD detection methods are driven by a single modality (e.g., either vision or language), leaving the rich information in multi-modal representations untapped. Inspired by the recent success of vision-language pre-training, this paper enriches the landscape of OOD detection from a single-modal to a multi-modal regime. Particularly, we propose Maximum Concept Matching (MCM), a simple yet effective zero-shot OOD detection method based on aligning visual features with textual concepts. We contribute in-depth analysis and theoretical insights to understand the effectiveness of MCM. Extensive experiments demonstrate that MCM achieves superior performance on a wide variety of real-world tasks. MCM with vision-language features outperforms a common baseline with pure visual features on a hard OOD task with semantically similar classes by 13.1% (AUROC). Code is available at https://github.com/deeplearning-wisc/MCM.
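To make the idea of "aligning visual features with textual concepts" concrete, below is a minimal sketch of MCM-style zero-shot OOD scoring. It assumes OpenAI's CLIP package is installed; the prompt template, class list, image path, and temperature value are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of an MCM-style score: softmax over temperature-scaled
# cosine similarities between an image embedding and textual concept
# embeddings, taking the maximum as the confidence that the input is ID.
import torch
import clip  # pip install git+https://github.com/openai/CLIP
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# In-distribution class names and prompt template (assumed for illustration).
id_classes = ["dog", "cat", "airplane", "truck"]
prompts = clip.tokenize([f"a photo of a {c}" for c in id_classes]).to(device)

# "example.jpg" is a hypothetical test image path.
image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    image_feat = model.encode_image(image)
    text_feat = model.encode_text(prompts)

# Cosine similarity between the image and each textual concept.
image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
sims = (image_feat @ text_feat.T).squeeze(0)

# Maximum concept matching score; tau = 1.0 is an assumed temperature.
tau = 1.0
mcm_score = torch.softmax(sims / tau, dim=-1).max().item()

# A lower score suggests the input is OOD; the threshold is application-dependent.
print(f"MCM score: {mcm_score:.4f}")
```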
