Paper Title

Unsupervised Domain Adaptation with Temporal-Consistent Self-Training for 3D Hand-Object Joint Reconstruction

Authors

Mengshi Qi, Edoardo Remelli, Mathieu Salzmann, Pascal Fua

Abstract

Deep learning solutions for hand-object 3D pose and shape estimation are now very effective when an annotated dataset is available to train them to handle the scenarios and lighting conditions they will encounter at test time. Unfortunately, this is not always the case, and one often has to resort to training them on synthetic data, which does not guarantee that they will work well in real situations. In this paper, we introduce an effective approach to addressing this challenge by exploiting 3D geometric constraints within a cycle generative adversarial network (CycleGAN) to perform domain adaptation. Furthermore, in contrast to most existing works, which fail to leverage the rich temporal information available in unlabeled real videos as a source of supervision, we propose to enforce short- and long-term temporal consistency to fine-tune the domain-adapted model in a self-supervised fashion. We will demonstrate that our approach outperforms state-of-the-art 3D hand-object joint reconstruction methods on three widely-used benchmarks and will make our code publicly available.
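
The abstract names short- and long-term temporal-consistency objectives but does not define them. Below is a minimal PyTorch sketch of one plausible formulation on per-frame 3D hand-joint predictions from unlabeled video; every name and design choice here (the function names, the acceleration-based long-term term, the 0.1 weight) is an illustrative assumption, not the paper's actual loss.

```python
# Hedged sketch, not the authors' released code: one way to impose
# short- and long-term temporal consistency on per-frame 3D joint
# predictions during self-supervised fine-tuning on unlabeled video.
import torch
import torch.nn.functional as F

def short_term_consistency(joints_t: torch.Tensor,
                           joints_t1: torch.Tensor) -> torch.Tensor:
    # Penalize large jumps between consecutive frames.
    # joints_t, joints_t1: (B, J, 3) predicted joints at frames t and t+1.
    return F.smooth_l1_loss(joints_t1, joints_t)

def long_term_consistency(joints_seq: torch.Tensor) -> torch.Tensor:
    # Penalize second-order differences (acceleration) over a longer
    # window, encouraging smooth trajectories.
    # joints_seq: (B, T, J, 3) predictions for T >= 3 consecutive frames.
    velocity = joints_seq[:, 1:] - joints_seq[:, :-1]   # (B, T-1, J, 3)
    accel = velocity[:, 1:] - velocity[:, :-1]          # (B, T-2, J, 3)
    return accel.pow(2).mean()

# Hypothetical self-training step on an unlabeled real clip:
#   preds = model(frames)                       # (B, T, J, 3)
#   loss = short_term_consistency(preds[:, 0], preds[:, 1]) \
#        + 0.1 * long_term_consistency(preds)   # 0.1 is an arbitrary weight
#   loss.backward()
```

Because no 3D ground truth is used, such terms can only regularize the domain-adapted model toward temporally coherent predictions; they would complement, not replace, the CycleGAN-based adaptation described above.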
