Paper Title

HOPE-Net: A Graph-based Model for Hand-Object Pose Estimation

Authors

Bardia Doosti, Shujon Naha, Majid Mirbagheri, David Crandall

Abstract

Hand-object pose estimation (HOPE) aims to jointly detect the poses of both a hand and a held object. In this paper, we propose a lightweight model called HOPE-Net which jointly estimates hand and object pose in 2D and 3D in real-time. Our network uses a cascade of two adaptive graph convolutional neural networks, one to estimate 2D coordinates of the hand joints and object corners, followed by another to convert 2D coordinates to 3D. Our experiments show that through end-to-end training of the full network, we achieve better accuracy for both the 2D and 3D coordinate estimation problems. The proposed 2D to 3D graph convolution-based model could be applied to other 3D landmark detection problems, where it is possible to first predict the 2D keypoints and then transform them to 3D.
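
To make the cascade described in the abstract concrete, below is a minimal sketch (not the authors' released code) of an adaptive graph convolution, i.e. a graph layer whose adjacency matrix is learned, chained into a 2D-to-3D lifting stage. The node count (29, assuming 21 hand joints plus 8 object box corners), the softmax-normalized learned adjacency, and the layer widths are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of an adaptive graph convolution cascade for 2D -> 3D lifting.
# Not the authors' implementation; node count and layer sizes are assumptions.

import torch
import torch.nn as nn

N_NODES = 29  # assumed: 21 hand joints + 8 object bounding-box corners


class AdaptiveGraphConv(nn.Module):
    """Graph convolution whose adjacency matrix is a trainable parameter."""

    def __init__(self, in_features, out_features, n_nodes=N_NODES):
        super().__init__()
        self.adj = nn.Parameter(torch.eye(n_nodes))   # learned node connectivity
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x):                              # x: (batch, nodes, features)
        # Mix features across nodes with the (softmax-normalized) learned adjacency,
        # then apply a shared per-node linear transform.
        x = torch.matmul(torch.softmax(self.adj, dim=-1), x)
        return self.linear(x)


class Lift2DTo3D(nn.Module):
    """Second stage of the cascade: map refined 2D keypoints to 3D coordinates."""

    def __init__(self, hidden=128):
        super().__init__()
        self.gc1 = AdaptiveGraphConv(2, hidden)
        self.gc2 = AdaptiveGraphConv(hidden, 3)
        self.act = nn.ReLU()

    def forward(self, kp2d):                           # kp2d: (batch, nodes, 2)
        return self.gc2(self.act(self.gc1(kp2d)))      # (batch, nodes, 3)


# Usage: lift a batch of predicted 2D keypoints (hand joints + object corners) to 3D.
lifter = Lift2DTo3D()
kp2d = torch.randn(4, N_NODES, 2)
kp3d = lifter(kp2d)
print(kp3d.shape)  # torch.Size([4, 29, 3])
```

In the full model, a first graph network of the same flavor would refine the image encoder's initial 2D keypoint estimates before this lifting stage; training the two stages end-to-end is what the abstract credits for the accuracy gains.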
