Paper Title

DGGAN: Depth-image Guided Generative Adversarial Networks for Disentangling RGB and Depth Images in 3D Hand Pose Estimation

Authors

Liangjian Chen, Shih-Yao Lin, Yusheng Xie, Yen-Yu Lin, Wei Fan, Xiaohui Xie

Abstract

Estimating 3D hand poses from RGB images is essential to a wide range of potential applications, but is challenging owing to substantial ambiguity in the inference of depth information from RGB images. State-of-the-art estimators address this problem by regularizing 3D hand pose estimation models during training to enforce the consistency between the predicted 3D poses and the ground-truth depth maps. However, these estimators rely on both RGB images and the paired depth maps during training. In this study, we propose a conditional generative adversarial network (GAN) model, called Depth-image Guided GAN (DGGAN), to generate realistic depth maps conditioned on the input RGB image, and use the synthesized depth maps to regularize the 3D hand pose estimation model, therefore eliminating the need for ground-truth depth maps. Experimental results on multiple benchmark datasets show that the synthesized depth maps produced by DGGAN are quite effective in regularizing the pose estimation model, yielding new state-of-the-art results in estimation accuracy, notably reducing the mean 3D end-point errors (EPE) by 4.7%, 16.5%, and 6.8% on the RHD, STB, and MHP datasets, respectively.
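
To make the training objective described in the abstract concrete, the PyTorch sketch below illustrates the general idea of depth-guided regularization: a conditional generator synthesizes a depth map from an RGB image, and a depth-consistency term regularizes the pose estimator so that paired ground-truth depth maps are not needed. This is a minimal hypothetical sketch, not the authors' DGGAN implementation: the network bodies, the loss weight lambda_depth, and the names DepthGenerator, PoseEstimator, and render_depth_from_pose are all assumptions, and the adversarial training of the generator itself is omitted.

```python
# Minimal sketch of depth-guided regularization for 3D hand pose estimation.
# Hypothetical architecture and losses; NOT the DGGAN authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthGenerator(nn.Module):
    """Conditional generator: RGB image -> synthesized single-channel depth map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, rgb):
        return self.net(rgb)

class PoseEstimator(nn.Module):
    """Toy regressor: RGB image -> 21 x 3 hand joint coordinates."""
    def __init__(self, num_joints=21):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, num_joints * 3)
    def forward(self, rgb):
        return self.head(self.features(rgb)).view(-1, 21, 3)

def render_depth_from_pose(pose, hw=(64, 64)):
    """Stand-in for a differentiable pose -> depth rendering step
    (a real system would use a proper renderer)."""
    b = pose.shape[0]
    return pose.mean(dim=(1, 2)).view(b, 1, 1, 1).expand(b, 1, *hw)

generator = DepthGenerator()   # assumed pre-trained where depth is available
estimator = PoseEstimator()
opt = torch.optim.Adam(estimator.parameters(), lr=1e-4)
lambda_depth = 0.1             # assumed weight for the depth-consistency term

rgb = torch.randn(4, 3, 64, 64)   # dummy batch of RGB hand crops
gt_pose = torch.randn(4, 21, 3)   # dummy 3D joint annotations

pred_pose = estimator(rgb)
synth_depth = generator(rgb)      # replaces the ground-truth depth map
pose_loss = F.mse_loss(pred_pose, gt_pose)
# Depth-consistency regularizer: the depth implied by the predicted pose
# should agree with the synthesized depth map.
depth_loss = F.mse_loss(render_depth_from_pose(pred_pose), synth_depth)
loss = pose_loss + lambda_depth * depth_loss
opt.zero_grad()
loss.backward()
opt.step()
```

Per the abstract, the synthesized depth maps take the place of the paired ground-truth depth maps that earlier estimators required during training, which is why only the pose estimator is being optimized in this sketch.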
