Paper Title
Learning an Invertible Output Mapping Can Mitigate Simplicity Bias in Neural Networks
Paper Authors
Paper Abstract
Deep Neural Networks are known to be brittle to even minor distribution shifts relative to the training distribution. While one line of work has demonstrated that the Simplicity Bias (SB) of DNNs - the bias towards learning only the simplest features - is a key reason for this brittleness, another recent line of work has surprisingly found that diverse/complex features are indeed learned by the backbone, and that the brittleness arises because the linear classification head relies primarily on the simplest features. To bridge the gap between these two lines of work, we first hypothesize and verify that while SB may not altogether preclude learning complex features, it amplifies simpler features over complex ones: simple features are replicated several times in the learned representations, while complex features might not be replicated. This phenomenon, which we term the Feature Replication Hypothesis, coupled with the implicit bias of SGD to converge to maximum-margin solutions in the feature space, leads models to rely mostly on simple features for classification. To mitigate this bias, we propose a Feature Reconstruction Regularizer (FRR) to ensure that the learned features can be reconstructed back from the logits. Using FRR in linear layer training (FRR-L) encourages the use of more diverse features for classification. We further propose to finetune the full network while freezing the weights of the linear layer trained with FRR-L, refining the learned features to make them more suitable for classification. Using this simple solution, we demonstrate up to 15% gains in OOD accuracy on recently introduced semi-synthetic datasets with extreme distribution shifts. Moreover, we demonstrate noteworthy gains over existing SOTA methods on the standard OOD benchmark DomainBed as well.
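Below is a minimal sketch of how the FRR-L objective described above might be implemented: a decoder maps logits back to features, and a reconstruction penalty encourages the linear head to preserve feature information rather than discard it. The dimensions, the MSE reconstruction loss, and names such as lambda_frr and frr_step are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

# Assumed dimensions for illustration only.
feature_dim, num_classes = 512, 10

head = nn.Linear(feature_dim, num_classes)      # linear classification head
decoder = nn.Linear(num_classes, feature_dim)   # reconstructs features from logits
ce_loss, mse_loss = nn.CrossEntropyLoss(), nn.MSELoss()
lambda_frr = 1.0                                # regularization weight (assumed)

def frr_step(features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Compute the FRR-L training loss on (frozen-backbone) features.

    Cross-entropy drives classification; the reconstruction term asks the
    output mapping to be (approximately) invertible on the learned features.
    """
    logits = head(features)
    recon = decoder(logits)
    return ce_loss(logits, labels) + lambda_frr * mse_loss(recon, features)

# Example usage with random data.
features = torch.randn(32, feature_dim)
labels = torch.randint(0, num_classes, (32,))
loss = frr_step(features, labels)
loss.backward()
```

In the second stage suggested by the abstract, the linear head trained this way would be frozen and the backbone finetuned with the usual classification loss; that stage is not shown here.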