Paper Title
Learning 3D Part Assembly from a Single Image
Paper Authors
Paper Abstract
Autonomous assembly is a crucial capability for robots in many applications. For this task, several problems such as obstacle avoidance, motion planning, and actuator control have been extensively studied in robotics. However, when it comes to task specification, the space of possibilities remains underexplored. Towards this end, we introduce a novel problem, single-image-guided 3D part assembly, along with a learning-based solution. We study this problem in the setting of furniture assembly from a given complete set of parts and a single image depicting the entire assembled object. Multiple challenges exist in this setting, including handling ambiguity among parts (e.g., slats in a chair back and leg stretchers) and 3D pose prediction for parts and part subassemblies, whether visible or occluded. We address these issues by proposing a two-module pipeline that leverages strong 2D-3D correspondences and assembly-oriented graph message-passing to infer part relationships. In experiments with a PartNet-based synthetic benchmark, we demonstrate the effectiveness of our framework as compared with three baseline approaches.
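To make the "graph message-passing to infer part relationships" idea concrete, below is a minimal, illustrative sketch of one round of message passing over a fully connected part graph, followed by a per-part pose head. All weights, dimensions, and the aggregation choice here are placeholder assumptions for illustration, not the paper's trained model or architecture details; only the overall pattern (exchange messages between part features, then regress a translation-plus-quaternion pose per part) reflects the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N parts, each with a D-dim feature vector, in a
# fully connected part graph. Weights are random placeholders.
N, D = 4, 8
feats = rng.normal(size=(N, D))
W_msg = rng.normal(size=(2 * D, D))   # message function (single linear layer)
W_upd = rng.normal(size=(2 * D, D))   # node-update function
W_pose = rng.normal(size=(D, 7))      # pose head: translation (3) + quaternion (4)

def relu(x):
    return np.maximum(x, 0.0)

def message_passing_round(feats):
    """One round: each part aggregates messages from all other parts."""
    n = feats.shape[0]
    new_feats = np.empty_like(feats)
    for i in range(n):
        # Messages from every other part j to part i.
        msgs = [relu(np.concatenate([feats[i], feats[j]]) @ W_msg)
                for j in range(n) if j != i]
        agg = np.max(msgs, axis=0)  # permutation-invariant aggregation
        new_feats[i] = relu(np.concatenate([feats[i], agg]) @ W_upd)
    return new_feats

feats = message_passing_round(feats)
poses = feats @ W_pose                 # (N, 7) raw pose parameters per part
# Normalize the quaternion portion to unit length (valid rotations).
quat = poses[:, 3:]
quat /= np.linalg.norm(quat, axis=1, keepdims=True)
print(poses.shape)  # (4, 7)
```

In a real pipeline this round would be stacked several times with learned weights, and the part features would be conditioned on image features (the paper's 2D-3D correspondence module) before pose regression.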