Paper Title

Self-Distillation for Unsupervised 3D Domain Adaptation

Authors

Adriano Cardace, Riccardo Spezialetti, Pierluigi Zama Ramirez, Samuele Salti, Luigi Di Stefano

Abstract

Point cloud classification is a popular task in 3D vision. However, previous works usually assume that point clouds at test time are obtained with the same procedure or sensor as those at training time. Unsupervised Domain Adaptation (UDA) instead breaks this assumption and tries to solve the task on an unlabeled target domain, leveraging only a supervised source domain. For point cloud classification, recent UDA methods try to align features across domains via auxiliary tasks such as point cloud reconstruction, which, however, do not optimize the discriminative power in the target domain in feature space. In contrast, in this work, we focus on obtaining a discriminative feature space for the target domain by enforcing consistency between a point cloud and its augmented version. We then propose a novel iterative self-training methodology that exploits Graph Neural Networks in the UDA context to refine pseudo-labels. We perform extensive experiments and set the new state of the art on standard UDA benchmarks for point cloud classification. Finally, we show how our approach can be extended to more complex tasks such as part segmentation.
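The consistency idea described in the abstract can be sketched in a few lines. The snippet below is a simplified illustration, not the authors' implementation: `augment` (random z-rotation plus jitter) and `embed` (a permutation-invariant max-pooled stand-in encoder) are hypothetical placeholders for the paper's learned augmentations and point-cloud network; the loss simply penalizes disagreement between the embeddings of a cloud and its augmented view.

```python
import numpy as np

def augment(points, rng):
    # Hypothetical augmentation: random rotation about the z-axis plus
    # small Gaussian jitter on every point (N x 3 array in, N x 3 out).
    theta = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return points @ rot.T + rng.normal(scale=0.01, size=points.shape)

def embed(points):
    # Stand-in encoder: per-point features followed by a max-pool,
    # so the global descriptor is invariant to point ordering.
    feats = np.tanh(points)      # placeholder per-point feature map
    return feats.max(axis=0)     # global descriptor (length-3 here)

def consistency_loss(points, rng):
    # Self-distillation-style objective: embeddings of a point cloud and
    # its augmented view should agree (L2 distance used for simplicity).
    z1 = embed(points)
    z2 = embed(augment(points, rng))
    return float(np.sum((z1 - z2) ** 2))
```

In the actual method this term would be minimized over the unlabeled target domain alongside the supervised source loss; the max-pooled encoder mirrors the standard permutation-invariance trick used by point-cloud networks.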
