Paper Title
Similarity of Pre-trained and Fine-tuned Representations
Paper Authors
Paper Abstract
In transfer learning, often only the last part of the network, the so-called head, is fine-tuned. Representation similarity analysis shows that the most significant change still occurs in the head even if all weights are updatable. However, recent results from few-shot learning have shown that representation change in the early layers, which are mostly convolutional, is beneficial, especially in the case of cross-domain adaptation. In our paper, we investigate whether that also holds true for transfer learning. In addition, we analyze the change of representation in transfer learning, both during pre-training and fine-tuning, and find that pre-trained structure is unlearned if it is not usable.
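To make the setup concrete, the following is a minimal, hypothetical sketch (not taken from the paper) of the two ingredients the abstract refers to: freezing all layers except the head for fine-tuning, and measuring how much a layer's representation changes between the pre-trained and fine-tuned model with linear CKA, one common representation similarity measure. The ResNet-18 backbone, the 10-class head, and the random feature matrices are illustrative assumptions.

import numpy as np
import torch
from torchvision.models import resnet18

def linear_cka(X, Y):
    # Linear centered kernel alignment between two representation
    # matrices of shape (n_samples, n_features).
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    cross = np.linalg.norm(X.T @ Y, "fro") ** 2
    return cross / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

# Head-only fine-tuning: freeze every layer except the final classifier ("head").
model = resnet18()  # in practice, pre-trained weights would be loaded here
for p in model.parameters():
    p.requires_grad = False
model.fc = torch.nn.Linear(model.fc.in_features, 10)  # new, trainable head (10 target classes assumed)

# Representation similarity analysis on one layer (dummy features for illustration):
feats_pretrained = np.random.randn(256, 512)  # layer activations before fine-tuning
feats_finetuned = np.random.randn(256, 512)   # same layer, same inputs, after fine-tuning
print("linear CKA:", linear_cka(feats_pretrained, feats_finetuned))

A CKA value close to 1 for a layer would indicate that its representation barely changed during fine-tuning, while values closer to 0 would indicate substantial representation change, which is the kind of per-layer comparison the abstract describes for the head versus the early convolutional layers.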