Paper Title

When does Bias Transfer in Transfer Learning?

Paper Authors

Hadi Salman, Saachi Jain, Andrew Ilyas, Logan Engstrom, Eric Wong, Aleksander Madry

Paper Abstract

Using transfer learning to adapt a pre-trained "source model" to a downstream "target task" can dramatically increase performance with seemingly no downside. In this work, we demonstrate that there can exist a downside after all: bias transfer, or the tendency for biases of the source model to persist even after adapting the model to the target class. Through a combination of synthetic and natural experiments, we show that bias transfer both (a) arises in realistic settings (such as when pre-training on ImageNet or other standard datasets) and (b) can occur even when the target dataset is explicitly de-biased. As transfer-learned models are increasingly deployed in the real world, our work highlights the importance of understanding the limitations of pre-trained source models. Code is available at https://github.com/MadryLab/bias-transfer
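
The abstract refers to the standard transfer-learning pipeline of adapting a pre-trained "source model" to a downstream "target task". The sketch below is a minimal illustration of that pipeline, not the authors' code from the linked repository; it assumes torchvision ≥ 0.13, a ResNet-18 source model, a hypothetical 10-class target task, and a random stand-in batch. Because the ImageNet-trained backbone weights are reused during fine-tuning, any biases acquired in pre-training are carried into the adapted model, which is the mechanism the paper studies as bias transfer.

```python
# Minimal transfer-learning sketch (illustrative only, not the paper's code):
# adapt an ImageNet-pre-trained source model to a downstream target task.
import torch
import torch.nn as nn
import torchvision

# Source model: ResNet-18 pre-trained on ImageNet (hypothetical choice).
source_model = torchvision.models.resnet18(weights="IMAGENET1K_V1")

# Adapt to a target task with, say, 10 classes by swapping the final layer.
num_target_classes = 10
source_model.fc = nn.Linear(source_model.fc.in_features, num_target_classes)

# Common variant: freeze the pre-trained backbone and train only the new head;
# full fine-tuning would instead leave requires_grad=True everywhere.
for name, param in source_model.named_parameters():
    param.requires_grad = name.startswith("fc")

optimizer = torch.optim.SGD(
    (p for p in source_model.parameters() if p.requires_grad), lr=1e-2
)
criterion = nn.CrossEntropyLoss()

# Stand-in target batch (random tensors) so the sketch runs without a dataset.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_target_classes, (8,))

# One fine-tuning step on the target task; the reused backbone weights are
# where source-model biases can persist even after adaptation.
source_model.train()
optimizer.zero_grad()
loss = criterion(source_model(images), labels)
loss.backward()
optimizer.step()
print(f"one fine-tuning step done, loss = {loss.item():.3f}")
```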
