Paper Title
Noise-based Enhancement for Foveated Rendering
Paper Authors
Abstract
Human visual sensitivity to spatial details declines towards the periphery. Novel image synthesis techniques, so-called foveated rendering, exploit this observation and reduce the spatial resolution of synthesized images for the periphery, avoiding the synthesis of high-spatial-frequency details that are costly to generate but not perceived by a viewer. However, contemporary techniques do not make a clear distinction between the range of spatial frequencies that must be reproduced and those that can be omitted. For a given eccentricity, there is a range of frequencies that are detectable but not resolvable. While the accurate reproduction of these frequencies is not required, an observer can detect their absence if completely omitted. We use this observation to improve the performance of existing foveated rendering techniques. We demonstrate that this specific range of frequencies can be efficiently replaced with procedural noise whose parameters are carefully tuned to image content and human perception. Consequently, these frequencies do not have to be synthesized during rendering, allowing more aggressive foveation, and they can be replaced by noise generated in a less expensive post-processing step, leading to improved performance of the rendering system. Our main contribution is a perceptually-inspired technique for deriving the parameters of the noise required for the enhancement and its calibration. The method operates on rendering output and runs at rates exceeding 200FPS at 4K resolution, making it suitable for integration with real-time foveated rendering systems for VR and AR devices. We validate our results and compare them to the existing contrast enhancement technique in user experiments.
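As a rough illustration of the idea described in the abstract, and not the authors' calibrated method, the sketch below injects band-limited procedural noise into the periphery of an already-foveated frame in a post-processing pass. The function name `enhance_periphery`, the linear eccentricity falloff, and the local-contrast modulation are assumptions made for this example only; the paper derives the noise parameters from a perceptual model instead.

```python
# Illustrative sketch only: simplified peripheral noise enhancement as a
# post-process. Parameters (fovea_radius, sigma_px, noise_gain) and the
# contrast estimate are placeholders, not the paper's calibration.
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_periphery(img, gaze_xy, fovea_radius=256.0,
                      sigma_px=1.5, noise_gain=0.15, seed=0):
    """img: float32 luminance image in [0, 1]; gaze_xy: (x, y) gaze position in pixels."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    ecc = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])        # eccentricity proxy in pixels
    periphery = np.clip((ecc - fovea_radius) / fovea_radius, 0.0, 1.0)  # 0 in fovea, 1 far out

    # Band-limited noise: white noise minus its low-pass copy keeps only the
    # higher spatial frequencies that foveation removed from the periphery.
    rng = np.random.default_rng(seed)
    white = rng.standard_normal((h, w)).astype(np.float32)
    band = white - gaussian_filter(white, sigma_px)

    # Crude local-contrast estimate so the noise amplitude follows image content.
    local_mean = gaussian_filter(img, 4.0)
    local_contrast = np.abs(img - local_mean)

    enhanced = img + noise_gain * periphery * local_contrast * band
    return np.clip(enhanced, 0.0, 1.0)
```

Because the pass touches each pixel once with a few separable filters and a noise lookup, it maps naturally onto a GPU fragment or compute shader, which is consistent with the real-time post-processing budget reported in the abstract.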