Paper Title
Mutual Distillation Learning Network for Trajectory-User Linking
Paper Authors
Paper Abstract
Trajectory-User Linking (TUL), which links trajectories to the users who generate them, is a challenging problem due to the sparsity of check-in mobility data. Existing methods ignore either historical data or the rich contextual features in check-in data, resulting in poor performance on the TUL task. In this paper, we propose a novel Mutual Distillation Learning Network, named MainTUL, to solve the TUL problem for sparse check-in mobility data. Specifically, MainTUL is composed of a Recurrent Neural Network (RNN) trajectory encoder that models sequential patterns of the input trajectory and a temporal-aware Transformer trajectory encoder that captures long-term time dependencies in the corresponding augmented historical trajectories. The knowledge learned on historical trajectories is then transferred between the two trajectory encoders to guide the learning of both, achieving mutual distillation of information. Experimental results on two real-world check-in mobility datasets demonstrate the superiority of MainTUL over state-of-the-art baselines. The source code of our model is available at https://github.com/Onedean/MainTUL.
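The mutual distillation described in the abstract can be illustrated as a symmetric KL divergence between the two encoders' softened user-prediction distributions. The sketch below is a minimal NumPy illustration; the function names, temperature value, and exact loss form are assumptions for illustration, not MainTUL's published objective:

```python
import numpy as np

def softmax(logits, tau=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / tau
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q, eps=1e-12):
    """KL(p || q), averaged over the batch."""
    return float(np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)))

def mutual_distillation_loss(logits_rnn, logits_transformer, tau=2.0):
    """Symmetric KL between the softened predictions of the RNN encoder
    (current trajectory) and the Transformer encoder (augmented history),
    so each encoder's knowledge guides the other (illustrative form)."""
    p_rnn = softmax(logits_rnn, tau)
    p_tf = softmax(logits_transformer, tau)
    return kl_div(p_rnn, p_tf) + kl_div(p_tf, p_rnn)

# Toy logits: a batch of 2 trajectories scored against 3 candidate users.
rng = np.random.default_rng(0)
a = rng.normal(size=(2, 3))
b = rng.normal(size=(2, 3))
loss = mutual_distillation_loss(a, b)
```

In practice each distillation term would be combined with a supervised classification loss per encoder; the symmetric form above captures only the "mutual" part, where gradients flow to both encoders.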