Paper Title
MetaSets: Meta-Learning on Point Sets for Generalizable Representations
Paper Authors
Paper Abstract
Deep learning techniques for point clouds have achieved strong performance on a range of 3D vision tasks. However, annotating large-scale point sets is costly, which makes it critical to learn generalizable representations that transfer well across different point sets. In this paper, we study a new problem, 3D Domain Generalization (3DDG), whose goal is to generalize a model to unseen domains of point clouds without any access to them during training. The problem is challenging because of the substantial geometry shift from simulated to real data: most existing 3D models underperform because they overfit to the complete geometries of the source domain. We propose to tackle this problem with MetaSets, which meta-learns point cloud representations from a group of classification tasks defined on carefully designed transformed point sets that encode specific geometry priors. The learned representations generalize better to various unseen domains with different geometries. We also design two benchmarks for Sim-to-Real transfer of 3D point clouds. Experimental results show that MetaSets outperforms existing 3D deep learning methods by large margins.
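To make the idea of "transformed point sets containing specific geometry priors" concrete, here is a minimal Python/NumPy sketch. The transforms and parameters below (`drop_half_space`, `nonuniform_density`, `jitter`) are illustrative assumptions about what such geometry priors could look like; the abstract does not specify the actual transforms, and this is not the authors' code.

```python
# A minimal sketch, not the authors' implementation: each transform below
# encodes one hypothetical geometry prior (occlusion, non-uniform density,
# sensor noise) and would define one classification task for meta-learning.
import numpy as np


def drop_half_space(points, rng):
    """Occlusion prior: keep points on one side of a random plane,
    mimicking the partial shapes produced by real depth sensors."""
    normal = rng.normal(size=3)
    normal /= np.linalg.norm(normal)
    proj = points @ normal
    return points[proj >= np.median(proj)]


def nonuniform_density(points, rng, min_keep=0.3):
    """Density prior: keep points near a random anchor with higher
    probability, so sampling density varies across the shape."""
    anchor = points[rng.integers(len(points))]
    dist = np.linalg.norm(points - anchor, axis=1)
    prob = np.clip(np.exp(-dist / (dist.mean() + 1e-8)), min_keep, 1.0)
    return points[rng.random(len(points)) < prob]


def jitter(points, rng, sigma=0.01):
    """Noise prior: small Gaussian perturbations of every point."""
    return points + rng.normal(scale=sigma, size=points.shape)


TRANSFORMS = [drop_half_space, nonuniform_density, jitter]

rng = np.random.default_rng(0)
cloud = rng.normal(size=(1024, 3))                # stand-in for a CAD point cloud
task_views = [t(cloud, rng) for t in TRANSFORMS]  # one task per geometry prior
```

Under this setup, each transform induces a separate classification task over the transformed clouds; a meta-learning scheme in the spirit of the abstract would meta-train on some of these tasks and meta-validate on held-out ones, so the learned representation cannot rely on the complete source geometry.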