Paper Title

Self-Supervised Anomaly Detection by Self-Distillation and Negative Sampling

Authors

Rafiee, Nima, Gholamipoorfard, Rahil, Adaloglou, Nikolas, Jaxy, Simon, Ramakers, Julius, Kollmann, Markus

Abstract

Detecting whether examples belong to a given in-distribution or are Out-Of-Distribution (OOD) requires identifying features specific to the in-distribution. In the absence of labels, these features can be learned by self-supervised techniques under the generic assumption that the most abstract features are those which are statistically most over-represented in comparison to other distributions from the same domain. In this work, we show that self-distillation of the in-distribution training set together with contrasting against negative examples derived from shifting transformation of auxiliary data strongly improves OOD detection. We find that this improvement depends on how the negative samples are generated. In particular, we observe that by leveraging negative samples, which keep the statistics of low-level features while changing the high-level semantics, higher average detection performance is obtained. Furthermore, good negative sampling strategies can be identified from the sensitivity of the OOD detection score. The efficiency of our approach is demonstrated across a diverse range of OOD detection problems, setting new benchmarks for unsupervised OOD detection in the visual domain.
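The key idea in the abstract is that good negative samples preserve low-level statistics while changing high-level semantics. A minimal sketch of one such "shifting transformation" is a rigid 90-degree rotation applied to auxiliary images: rotation only permutes pixels, so per-image intensity statistics are unchanged, yet object semantics are altered. (This is an illustrative choice, not the paper's exact pipeline; `make_negatives` and its rotation parameter `k` are hypothetical names.)

```python
import numpy as np

def make_negatives(aux_batch: np.ndarray, k: int = 1) -> np.ndarray:
    """Derive negative samples from auxiliary data via a shifting
    transformation (here: k * 90-degree rotation of each image).

    Rotation permutes pixel positions, so low-level statistics such as the
    per-image mean and standard deviation are preserved exactly, while the
    high-level semantic content is shifted.
    """
    # aux_batch has shape (N, H, W, C); rotate in the spatial (H, W) plane
    return np.rot90(aux_batch, k=k, axes=(1, 2))

# Sanity check: per-image pixel statistics survive the transformation
aux = np.random.rand(8, 32, 32, 3)
neg = make_negatives(aux)
assert neg.shape == (8, 32, 32, 3)
assert np.allclose(aux.mean(axis=(1, 2, 3)), neg.mean(axis=(1, 2, 3)))
assert np.allclose(aux.std(axis=(1, 2, 3)), neg.std(axis=(1, 2, 3)))
```

In a contrastive OOD setup, such negatives would be contrasted against the in-distribution training set during self-distillation, forcing the model to discriminate on semantics rather than on low-level image statistics.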
