Paper Title


Towards Reliable Zero Shot Classification in Self-Supervised Models with Conformal Prediction

Authors

Bhawesh Kumar, Anil Palepu, Rudraksh Tuwani, Andrew Beam

Abstract


Self-supervised models trained with a contrastive loss, such as CLIP, have been shown to be very powerful in zero-shot classification settings. However, to be used as a zero-shot classifier, these models require the user to provide new captions over a fixed set of labels at test time. In many settings, it is hard or impossible to know whether a new query caption is compatible with the source captions used to train the model. We address these limitations by framing the zero-shot classification task as an outlier detection problem and develop a conformal prediction procedure to assess when a given test caption may be reliably used. On a real-world medical example, we show that our proposed conformal procedure improves the reliability of CLIP-style models in the zero-shot classification setting, and we provide an empirical analysis of the factors that may affect its performance.
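The core idea of the conformal outlier-detection procedure described in the abstract can be sketched as follows. This is a minimal illustration of a standard conformal p-value test, not the authors' exact method: the calibration scores here are synthetic stand-ins for whatever nonconformity score (e.g., one derived from CLIP image-caption similarities) the procedure would use, and the significance level `alpha` is an arbitrary choice.

```python
import numpy as np

def conformal_pvalue(cal_scores, test_score):
    """Conformal p-value: fraction of calibration nonconformity scores
    at least as extreme as the test score (with the +1 correction).
    A small p-value suggests the test point is an outlier relative
    to the calibration distribution."""
    n = len(cal_scores)
    return (1 + np.sum(cal_scores >= test_score)) / (n + 1)

# Synthetic calibration nonconformity scores (illustrative assumption).
rng = np.random.default_rng(0)
cal = rng.normal(0.0, 1.0, size=500)

alpha = 0.05  # significance level (illustrative choice)

p_typical = conformal_pvalue(cal, 0.0)   # score near the calibration bulk
p_extreme = conformal_pvalue(cal, 4.0)   # score far in the tail

# A caption whose score yields p <= alpha would be flagged as
# incompatible with the calibration data and not used for zero-shot
# classification; otherwise it is accepted.
print(p_typical > alpha, p_extreme <= alpha)
```

Under exchangeability of the calibration and test scores, this p-value is (super-)uniform for in-distribution points, so the flagging rule controls the false-alarm rate at level `alpha`.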
