Paper Title
AROS: Affordance Recognition with One-Shot Human Stances
Paper Authors
Paper Abstract
We present AROS, a one-shot learning approach that uses an explicit representation of interactions between highly articulated human poses and 3D scenes. The approach is one-shot in that the method does not require re-training to add new affordance instances. Furthermore, only one or a small handful of examples of the target pose are needed to describe the interaction. Given a 3D mesh of a previously unseen scene, we can predict affordance locations that support the interactions and generate corresponding articulated 3D human bodies around them. We evaluate our approach on three public datasets of scans of real environments with varying degrees of noise. Via rigorous statistical analysis of crowdsourced evaluations, results show that our one-shot approach outperforms data-intensive baselines by up to 80\%.