Paper Title
Primitive3D: 3D Object Dataset Synthesis from Randomly Assembled Primitives
Paper Authors
Paper Abstract
Numerous advancements in deep learning can be attributed to access to large-scale, well-annotated datasets. However, such datasets are prohibitively expensive in 3D computer vision due to the substantial collection cost. To alleviate this issue, we propose a cost-effective method for automatically generating a large number of annotated 3D objects. In particular, we synthesize objects simply by assembling multiple random primitives. These objects are thus auto-annotated with part labels that originate from the primitives. This allows us to perform multi-task learning by combining supervised segmentation with unsupervised reconstruction. Considering the large overhead of learning on the generated dataset, we further propose a dataset distillation strategy to remove samples that are redundant with respect to a target dataset. We conduct extensive experiments on the downstream task of 3D object classification. The results indicate that our dataset, together with multi-task pretraining on its annotations, achieves the best performance compared to other commonly used datasets. Further study suggests that our strategy can improve model performance under the pretraining-and-fine-tuning scheme, especially for small-scale datasets. In addition, pretraining with the proposed dataset distillation method saves 86\% of the pretraining time with negligible performance degradation. We expect that our attempt provides a new data-centric perspective for training 3D deep models.
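To make the synthesis idea concrete, below is a minimal sketch of assembling random primitives into an auto-annotated point cloud. The primitive types (spheres and boxes), the sampling scheme, and the counts are illustrative assumptions, not the paper's exact pipeline; the key point is that each point inherits a part label for free from the primitive that generated it, which is where the cost advantage comes from.

```python
import numpy as np

def sample_sphere(n, center, radius):
    """Sample n points uniformly on a sphere surface."""
    v = np.random.randn(n, 3)
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return center + radius * v

def sample_box(n, center, half_extents):
    """Sample n points on the surface of an axis-aligned box."""
    pts = np.random.uniform(-1.0, 1.0, size=(n, 3))
    # Snap one coordinate of each point to a random face.
    axis = np.random.randint(0, 3, size=n)
    sign = np.random.choice([-1.0, 1.0], size=n)
    pts[np.arange(n), axis] = sign
    return center + pts * half_extents

def synthesize_object(num_primitives=4, points_per_primitive=512):
    """Assemble random primitives into a point cloud with per-point part labels."""
    points, labels = [], []
    for part_id in range(num_primitives):
        center = np.random.uniform(-0.5, 0.5, size=3)
        if np.random.rand() < 0.5:
            pts = sample_sphere(points_per_primitive, center,
                                radius=np.random.uniform(0.1, 0.3))
        else:
            pts = sample_box(points_per_primitive, center,
                             half_extents=np.random.uniform(0.1, 0.3, size=3))
        points.append(pts)
        # Part labels come for free from the generating primitive.
        labels.append(np.full(points_per_primitive, part_id))
    return np.concatenate(points), np.concatenate(labels)

pts, lbl = synthesize_object()
print(pts.shape, lbl.shape)  # (2048, 3) (2048,)
```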
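The multi-task objective pairing supervised part segmentation with unsupervised reconstruction could be written as follows. This is a sketch under common conventions (cross-entropy for segmentation, Chamfer distance for reconstruction, a weighting factor `lambda_rec`); the actual losses and weighting used in the paper may differ.

```python
import torch
import torch.nn.functional as F

def chamfer_distance(pred, target):
    """Symmetric Chamfer distance between point clouds of shape (B, N, 3)."""
    d = torch.cdist(pred, target) ** 2          # pairwise squared distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()

def multitask_loss(seg_logits, part_labels, recon_points, input_points,
                   lambda_rec=1.0):
    """Supervised segmentation loss plus weighted unsupervised reconstruction.

    seg_logits: (B, N, num_parts); part_labels: (B, N);
    recon_points and input_points: (B, N, 3).
    """
    seg_loss = F.cross_entropy(seg_logits.transpose(1, 2), part_labels)
    rec_loss = chamfer_distance(recon_points, input_points)
    return seg_loss + lambda_rec * rec_loss
```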
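One plausible reading of the dataset distillation step, stated here as an assumption rather than the paper's algorithm, is to score each generated sample by its proximity to the target dataset in some feature space and keep only the closest fraction; the default `keep_ratio=0.14` loosely mirrors the reported 86\% time saving.

```python
import numpy as np

def distill_dataset(gen_features, target_features, keep_ratio=0.14):
    """Keep the generated samples closest to the target feature distribution.

    gen_features: (G, D) embeddings of generated samples;
    target_features: (T, D) embeddings of the target dataset.
    The nearest-neighbor scoring rule here is an assumption.
    """
    d = np.linalg.norm(
        gen_features[:, None, :] - target_features[None, :, :], axis=-1)
    nearest = d.min(axis=1)                 # distance to closest target sample
    k = int(len(gen_features) * keep_ratio)
    return np.argsort(nearest)[:k]          # indices of retained samples
```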