Paper Title

A Meta-Analysis of Distributionally-Robust Models

Authors

Benjamin Feuer, Ameya Joshi, Chinmay Hegde

Abstract

State-of-the-art image classifiers trained on massive datasets (such as ImageNet) have been shown to be vulnerable to a range of both intentional and incidental distribution shifts. On the other hand, several recent classifiers with favorable out-of-distribution (OOD) robustness properties have emerged, achieving high accuracy on their target tasks while maintaining their in-distribution accuracy on challenging benchmarks. We present a meta-analysis on a wide range of publicly released models, most of which have been published over the last twelve months. Through this meta-analysis, we empirically identify four main commonalities for all the best-performing OOD-robust models, all of which illuminate the considerable promise of vision-language pre-training.
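The abstract contrasts accuracy on a model's target (in-distribution) task with accuracy under distribution shift. A minimal sketch of how such a comparison might be tabulated in a meta-analysis follows; the helper functions, model names, and accuracy numbers are all hypothetical placeholders for illustration, not figures from the paper.

```python
# Hypothetical sketch: comparing in-distribution (ID) vs. out-of-distribution
# (OOD) accuracy across models. All names and numbers are fabricated.

def accuracy(predictions, labels):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def robustness_gap(id_acc, ood_acc):
    """Accuracy drop under distribution shift (smaller = more robust)."""
    return id_acc - ood_acc

# Illustrative (made-up) accuracies for two hypothetical models.
models = {
    "baseline-cnn":        {"id": 0.80, "ood": 0.45},
    "vl-pretrained-model": {"id": 0.78, "ood": 0.70},
}

for name, acc in models.items():
    gap = robustness_gap(acc["id"], acc["ood"])
    print(f"{name}: ID={acc['id']:.2f} OOD={acc['ood']:.2f} gap={gap:.2f}")
```

In this framing, a model like the hypothetical `vl-pretrained-model` with a slightly lower ID accuracy but a much smaller gap would count as more OOD-robust, which mirrors the kind of commonality the meta-analysis reports for vision-language pre-trained models.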
