Title

Do Deep Networks Transfer Invariances Across Classes?

Authors

Zhou, Allan, Tajwar, Fahim, Robey, Alexander, Knowles, Tom, Pappas, George J., Hassani, Hamed, Finn, Chelsea

Abstract

To generalize well, classifiers must learn to be invariant to nuisance transformations that do not alter an input's class. Many problems have "class-agnostic" nuisance transformations that apply similarly to all classes, such as lighting and background changes for image classification. Neural networks can learn these invariances given sufficient data, but many real-world datasets are heavily class imbalanced and contain only a few examples for most of the classes. We therefore pose the question: how well do neural networks transfer class-agnostic invariances learned from the large classes to the small ones? Through careful experimentation, we observe that invariance to class-agnostic transformations is still heavily dependent on class size, with the networks being much less invariant on smaller classes. This result holds even when using data balancing techniques, and suggests poor invariance transfer across classes. Our results provide one explanation for why classifiers generalize poorly on unbalanced and long-tailed distributions. Based on this analysis, we show how a generative approach for learning the nuisance transformations can help transfer invariances across classes and improve performance on a set of imbalanced image classification benchmarks. Source code for our experiments is available at https://github.com/AllanYangZhou/generative-invariance-transfer.
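The abstract's central measurement is per-class invariance: how much a classifier's prediction changes when a class-agnostic nuisance transformation (e.g., a brightness shift) is applied to an input, grouped by class. Below is a minimal, hypothetical sketch of that measurement, not the paper's actual protocol: a toy linear softmax classifier with random weights stands in for a trained network, a uniform additive shift stands in for a real nuisance transformation, and mean KL divergence between predictions on original and transformed inputs serves as the (lack-of-)invariance score, where lower means more invariant.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy linear classifier (hypothetical stand-in for a trained network).
W = rng.normal(size=(16, 3))

def predict(x):
    return softmax(x @ W)

def brightness_shift(x, delta=0.5):
    # A class-agnostic nuisance transformation: the same uniform
    # additive shift is applied regardless of the input's class.
    return x + delta

def mean_kl(p, q, eps=1e-12):
    # Average KL divergence KL(p || q) over a batch of distributions.
    return float(np.mean(np.sum(p * np.log((p + eps) / (q + eps)), axis=-1)))

def classwise_invariance(xs_by_class):
    # For each class, compare predictions before vs. after the nuisance
    # transformation; lower KL indicates a more invariant classifier.
    return {c: mean_kl(predict(x), predict(brightness_shift(x)))
            for c, x in xs_by_class.items()}

# Synthetic per-class data; an imbalanced dataset would use very
# different sample counts per class here.
data = {c: rng.normal(size=(50, 16)) for c in range(3)}
scores = classwise_invariance(data)
```

Comparing these scores between large (head) and small (tail) classes is the kind of diagnostic the abstract describes: if invariance transferred well, the scores would be similar across classes regardless of class size.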
