Paper Title

Distant Transfer Learning via Deep Random Walk

Paper Authors

Qiao Xiao, Yu Zhang

Paper Abstract

Transfer learning, which improves learning performance in the target domain by leveraging useful knowledge from the source domain, often requires that the two domains be very close, which limits its application scope. Recently, distant transfer learning has been studied to transfer knowledge between two distant or even totally unrelated domains via auxiliary domains, which are usually unlabeled and serve as a bridge, in the spirit of human transitive inference: two completely unrelated concepts can be connected through gradual knowledge transfer. In this paper, we study distant transfer learning by proposing a DeEp Random Walk basEd distaNt Transfer (DERWENT) method. Different from existing distant transfer learning models, which implicitly identify the path of knowledge transfer between source and target instances through auxiliary instances, the proposed DERWENT model can explicitly learn such paths via the deep random walk technique. Specifically, based on sequences identified by the random walk technique on a data graph where source and target data have no direct edges, the proposed DERWENT model enforces adjacent data points in a sequence to be similar, makes the ending data point be represented by other data points in the same sequence, and considers weighted training losses of source data. Empirical studies on several benchmark datasets demonstrate that the proposed DERWENT algorithm yields state-of-the-art performance.
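The abstract's key ingredient is sampling random-walk sequences over a data graph in which source and target nodes share no direct edges, so any walk linking them must pass through auxiliary nodes. Below is a minimal sketch of that sampling step only; the graph construction, function name, and parameters are illustrative assumptions, not the paper's actual implementation (which further learns representations from these sequences):

```python
import random
from collections import defaultdict

def random_walk_sequences(edges, start_nodes, walk_len=5, num_walks=10, seed=0):
    """Sample random-walk sequences on an undirected data graph.

    `edges` is a list of (u, v) node pairs. In the distant-transfer
    setting, source and target nodes would have no direct edges, so a
    walk connecting them necessarily passes through auxiliary nodes.
    """
    rng = random.Random(seed)
    adj = defaultdict(list)
    for u, v in edges:          # build an undirected adjacency list
        adj[u].append(v)
        adj[v].append(u)
    walks = []
    for _ in range(num_walks):
        for start in start_nodes:
            walk = [start]
            # extend the walk by repeatedly moving to a random neighbor
            while len(walk) < walk_len and adj[walk[-1]]:
                walk.append(rng.choice(adj[walk[-1]]))
            walks.append(walk)
    return walks

# Toy graph: source "s" reaches target "t" only via auxiliary nodes a1, a2.
edges = [("s", "a1"), ("a1", "a2"), ("a2", "t")]
walks = random_walk_sequences(edges, ["s"], walk_len=4, num_walks=2)
```

Each sampled sequence then plays the role described above: adjacent points in a walk are encouraged to be similar, and the walk's ending point is represented by the other points in the same sequence.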
