Paper Title

Learning Adaptive Sampling and Reconstruction for Volume Visualization

Authors

Sebastian Weiss, Mustafa Işık, Justus Thies, Rüdiger Westermann

Abstract

A central challenge in data visualization is to understand which data samples are required to generate an image of a data set in which the relevant information is encoded. In this work, we make a first step towards answering the question of whether an artificial neural network can predict where to sample the data with higher or lower density, by learning correspondences between the data, the sampling patterns and the generated images. We introduce a novel neural rendering pipeline, which is trained end-to-end to generate a sparse adaptive sampling structure from a given low-resolution input image, and to reconstruct a high-resolution image from the sparse set of samples. For the first time, to the best of our knowledge, we demonstrate that the selection of structures that are relevant for the final visual representation can be jointly learned together with the reconstruction of this representation from these structures. To this end, we introduce differentiable sampling and reconstruction stages, which can leverage back-propagation based on supervised losses solely on the final image. We shed light on the adaptive sampling patterns generated by the network pipeline and analyze their use for volume visualization, including isosurface and direct volume rendering.
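To make the pipeline described in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch of such an end-to-end setup: a sampling network predicts an importance map from a low-resolution input, samples are taken from the full-resolution rendering through a differentiable (here simply soft-masked) step, a reconstruction network produces the high-resolution output, and the loss is applied only to that final image so gradients flow back through both stages. This is not the authors' released code; all layer sizes, names, and the soft-mask sampling stand in for the paper's actual differentiable sampling mechanism.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SamplingNet(nn.Module):
    """Predicts a per-pixel sampling importance map from a low-res input."""
    def __init__(self, in_ch=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, low_res, scale=4):
        imp = torch.sigmoid(self.body(low_res))  # importance in (0, 1)
        # Upsample the importance map to the target (high) resolution.
        return F.interpolate(imp, scale_factor=scale, mode='bilinear',
                             align_corners=False)

class ReconstructionNet(nn.Module):
    """Reconstructs a dense high-res image from sparsely sampled values."""
    def __init__(self, in_ch=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, sparse_samples, mask):
        # Concatenate sparse values with the mask so the network knows
        # which pixels carry real measurements.
        return self.body(torch.cat([sparse_samples, mask], dim=1))

# One training step: supervision only on the reconstructed image.
sampler, recon = SamplingNet(), ReconstructionNet()
opt = torch.optim.Adam(list(sampler.parameters()) + list(recon.parameters()),
                       lr=1e-4)

low_res  = torch.rand(1, 1, 64, 64)    # cheap low-resolution rendering
full_res = torch.rand(1, 1, 256, 256)  # ground-truth high-resolution rendering

importance = sampler(low_res)          # where to sample
mask = importance                      # soft, differentiable stand-in mask
sparse = full_res * mask               # masked "samples" of the data
pred = recon(sparse, mask)             # reconstructed high-res image

loss = F.l1_loss(pred, full_res)       # loss on the final image only
loss.backward()                        # gradients reach both stages
opt.step()
```

In this simplified sketch, the soft multiplicative mask keeps the whole chain differentiable; the paper's contribution is a sampling stage that produces genuinely sparse adaptive patterns while still admitting back-propagation from the image-space loss.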
