Paper Title
Using Human Perception to Regularize Transfer Learning
Paper Authors
Paper Abstract
Recent trends in the machine learning community show that models with fidelity toward human perceptual measurements perform strongly on vision tasks. Likewise, human behavioral measurements have been used to regularize model performance. But can the latent knowledge gained from these measurements be transferred across different learning objectives? In this work, we introduce PERCEP-TL (Perceptual Transfer Learning), a methodology for improving transfer learning through the regularizing power of psychophysical labels in models. We demonstrate which models are affected the most by perceptual transfer learning and find that models with high behavioral fidelity -- including vision transformers -- improve the most from this regularization, by as much as 1.9 percentage points in Top@1 accuracy. These findings suggest that biologically inspired learning agents can benefit from human behavioral measurements as regularizers, and that psychophysically learned representations can be transferred to independent evaluation tasks.
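The abstract does not spell out how the psychophysical labels enter the training objective, so the following is only a minimal sketch of one plausible formulation: per-example human measurements (for instance, reaction times) re-weight a standard cross-entropy loss during fine-tuning. The function name `psychophysical_loss`, the min-max weighting scheme, and the hyperparameter `lam` are illustrative assumptions, not the paper's actual PERCEP-TL implementation.

```python
import torch
import torch.nn.functional as F

def psychophysical_loss(logits, targets, reaction_times, lam=0.1):
    """Cross-entropy plus a psychophysically weighted penalty (illustrative sketch).

    reaction_times: per-example human reaction times (float tensor); longer
    times are treated here as a proxy for higher perceptual difficulty.
    """
    # Standard per-example cross-entropy.
    ce = F.cross_entropy(logits, targets, reduction="none")
    # Scale reaction times to [0, 1] so the penalty is unit-free.
    rt = (reaction_times - reaction_times.min()) / (
        reaction_times.max() - reaction_times.min() + 1e-8
    )
    # Up-weight examples that humans found difficult.
    penalty = (rt * ce).mean()
    return ce.mean() + lam * penalty
```

Under this reading, the regularized loss would simply replace the plain cross-entropy term when fine-tuning a pretrained backbone on the target task, leaving the rest of the transfer-learning pipeline unchanged.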