Paper Title

Neural Rendering of Humans in Novel View and Pose from Monocular Video

Paper Authors

Tiantian Wang, Nikolaos Sarafianos, Ming-Hsuan Yang, Tony Tung

Paper Abstract

We introduce a new method that generates photo-realistic humans under novel views and poses given a monocular video as input. Despite significant recent progress on this topic, with several methods exploring shared canonical neural radiance fields in dynamic scenes, learning a user-controlled model for unseen poses remains a challenging task. To tackle this problem, we introduce an effective method to a) integrate observations across several frames and b) encode the appearance at each individual frame. We accomplish this by utilizing both the human pose, which models the body shape, and point clouds that partially cover the human as input. Our approach simultaneously learns a shared set of latent codes anchored to the human pose across several frames, and an appearance-dependent code anchored to the incomplete point cloud generated from each frame and its predicted depth. The former, pose-based code models the shape of the performer, whereas the latter, point-cloud-based code predicts fine-level details and reasons about missing structures at unseen poses. To further recover non-visible regions in query frames, we employ a temporal transformer to integrate features of points in the query frame with features of tracked body points from automatically selected key frames. Experiments on various sequences of dynamic humans from different datasets, including ZJU-MoCap, show that our method significantly outperforms existing approaches under unseen poses and novel views given monocular videos as input.
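To make the temporal-fusion idea in the abstract more concrete, below is a minimal PyTorch sketch of a transformer block in which per-point features from the query frame attend to features of the same tracked body points in several key frames. This is an illustrative assumption, not the authors' implementation: the module name `TemporalPointTransformer`, the feature dimension, the number of heads, and the input shapes are all hypothetical choices made here for clarity.

```python
# Minimal sketch (assumption, not the paper's code): fuse per-point features of a
# query frame with features of the same tracked body points in K key frames,
# using standard multi-head cross-attention as a stand-in for the temporal transformer.
import torch
import torch.nn as nn


class TemporalPointTransformer(nn.Module):
    def __init__(self, feat_dim: int = 256, num_heads: int = 4):
        super().__init__()
        # Query-frame point features attend to key-frame features of the same points.
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(feat_dim)
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(), nn.Linear(feat_dim, feat_dim)
        )

    def forward(self, query_feats: torch.Tensor, key_feats: torch.Tensor) -> torch.Tensor:
        """
        query_feats: (N, 1, C)  features of N sampled points in the query frame
        key_feats:   (N, K, C)  features of the same tracked points in K key frames
        returns:     (N, 1, C)  temporally fused per-point features
        """
        fused, _ = self.attn(query_feats, key_feats, key_feats)
        x = self.norm(query_feats + fused)           # residual connection
        return self.norm(x + self.mlp(x))


if __name__ == "__main__":
    model = TemporalPointTransformer(feat_dim=256, num_heads=4)
    q = torch.randn(1024, 1, 256)   # 1024 query points, one query frame
    k = torch.randn(1024, 4, 256)   # same points tracked in 4 key frames
    print(model(q, k).shape)        # torch.Size([1024, 1, 256])
```

In this reading, the fused per-point features would then condition the radiance field for color and density prediction; the actual key-frame selection and point tracking described in the abstract are outside the scope of this sketch.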
