Paper Title
Light Stage Super-Resolution: Continuous High-Frequency Relighting
Paper Authors
Paper Abstract
The light stage has been widely used in computer graphics for the past two decades, primarily to enable the relighting of human faces. By capturing the appearance of the human subject under different light sources, one obtains the light transport matrix of that subject, which enables image-based relighting in novel environments. However, due to the finite number of lights in the stage, the light transport matrix only represents a sparse sampling on the entire sphere. As a consequence, relighting the subject with a point light or a directional source that does not coincide exactly with one of the lights in the stage requires interpolating and resampling the images corresponding to nearby lights, and this leads to ghosting shadows, aliased specularities, and other artifacts. To ameliorate these artifacts and produce better results under arbitrary high-frequency lighting, this paper proposes a learning-based solution for the "super-resolution" of scans of human faces taken from a light stage. Given an arbitrary "query" light direction, our method aggregates the captured images corresponding to neighboring lights in the stage, and uses a neural network to synthesize a rendering of the face that appears to be illuminated by a "virtual" light source at the query location. This neural network must circumvent the inherent aliasing and regularity of the light stage data that was used for training, which we accomplish through the use of regularized traditional interpolation methods within our network. Our learned model is able to produce renderings for arbitrary light directions that exhibit realistic shadows and specular highlights, and is able to generalize across a wide variety of subjects.
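To make the setup concrete, below is a minimal sketch (Python with NumPy, not code from the paper) of image-based relighting with a light transport matrix of one-light-at-a-time (OLAT) captures, and of the naive "virtual light" baseline that blends the nearest stage lights for a query direction falling between them. The array shapes, the k-nearest blending rule, and the function names relight and naive_virtual_light are illustrative assumptions; the paper's learned network replaces this fixed blending step rather than reproducing it.

import numpy as np

# OLAT captures: one image per stage light.
# olat_images has shape (num_lights, H, W, 3); light_dirs has shape
# (num_lights, 3) and holds unit direction vectors of the stage lights.

def relight(olat_images, per_light_intensities):
    # Image-based relighting: the rendering under a new environment is a
    # weighted sum of the OLAT images, i.e. the light transport matrix
    # applied to the environment's per-light intensities.
    return np.tensordot(per_light_intensities, olat_images, axes=1)

def naive_virtual_light(olat_images, light_dirs, query_dir, k=3):
    # Baseline rendering for a query direction that does not coincide with
    # a stage light: blend the k nearest lights with weights proportional
    # to angular proximity. This is the kind of traditional interpolation
    # whose ghosted shadows and aliased specularities the learned model
    # is meant to remove.
    query_dir = np.asarray(query_dir, dtype=np.float64)
    query_dir = query_dir / np.linalg.norm(query_dir)
    cosines = light_dirs @ query_dir              # similarity to each stage light
    nearest = np.argsort(-cosines)[:k]            # indices of the k closest lights
    weights = np.clip(cosines[nearest], 0.0, None)
    weights = weights / weights.sum()             # convex combination of neighbors
    return np.tensordot(weights, olat_images[nearest], axes=1)

# Tiny synthetic example: 4 lights, 2x2 images.
light_dirs = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0], [-1, 0, 0]], dtype=np.float64)
olat_images = np.random.rand(4, 2, 2, 3)
env_rendering = relight(olat_images, np.array([0.5, 0.2, 0.2, 0.1]))
query_rendering = naive_virtual_light(olat_images, light_dirs, query_dir=[0.2, 0.1, 1.0])

As described in the abstract, the proposed method feeds the same neighboring OLAT images into a neural network, regularized by traditional interpolation, so that the synthesized "virtual light" rendering retains sharp shadows and specular highlights instead of the blended artifacts produced by this baseline.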