Paper Title

Implicit Geometric Regularization for Learning Shapes

Authors

Amos Gropp, Lior Yariv, Niv Haim, Matan Atzmon, Yaron Lipman

Abstract

Representing shapes as level sets of neural networks has been recently proved to be useful for different shape analysis and reconstruction tasks. So far, such representations were computed using either: (i) pre-computed implicit shape representations; or (ii) loss functions explicitly defined over the neural level sets. In this paper we offer a new paradigm for computing high fidelity implicit neural representations directly from raw data (i.e., point clouds, with or without normal information). We observe that a rather simple loss function, encouraging the neural network to vanish on the input point cloud and to have a unit norm gradient, possesses an implicit geometric regularization property that favors smooth and natural zero level set surfaces, avoiding bad zero-loss solutions. We provide a theoretical analysis of this property for the linear case, and show that, in practice, our method leads to state of the art implicit neural representations with higher level-of-details and fidelity compared to previous methods.
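The loss the abstract describes can be made concrete: the network should vanish on the input point cloud, optionally match its normals there, and have unit-norm gradients in the surrounding volume (the eikonal term, which supplies the implicit geometric regularization). Below is a minimal PyTorch sketch of such an objective, assuming `f` is any network mapping 3D points to scalars; the helper `gradient`, the sampling of `domain_pts`, and the weights `lam` and `tau` are illustrative assumptions, not the authors' released implementation.

```python
# Sketch of an IGR-style loss (assumed form, not the authors' released code).
import torch
import torch.nn as nn


def gradient(f_values: torch.Tensor, points: torch.Tensor) -> torch.Tensor:
    """Spatial gradient of f with respect to its input points, via autograd."""
    return torch.autograd.grad(
        outputs=f_values,
        inputs=points,
        grad_outputs=torch.ones_like(f_values),
        create_graph=True,  # keep the graph so the loss remains differentiable
    )[0]


def igr_loss(f: nn.Module,
             surface_pts: torch.Tensor,      # (n, 3) input point cloud
             surface_normals: torch.Tensor,  # (n, 3) normals, or None
             domain_pts: torch.Tensor,       # (m, 3) points sampled off-surface
             lam: float = 0.1,               # hypothetical eikonal weight
             tau: float = 1.0                # hypothetical normal weight
             ) -> torch.Tensor:
    surface_pts = surface_pts.requires_grad_(True)
    domain_pts = domain_pts.requires_grad_(True)

    # Data term: f should vanish on the input point cloud.
    f_surf = f(surface_pts)
    loss = f_surf.abs().mean()

    # Optional normal term: grad f should align with the given normals.
    if surface_normals is not None:
        grad_surf = gradient(f_surf, surface_pts)
        loss = loss + tau * (grad_surf - surface_normals).norm(2, dim=-1).mean()

    # Eikonal term: encourage unit-norm gradients around the shape;
    # this is the implicit geometric regularizer the abstract refers to.
    grad_dom = gradient(f(domain_pts), domain_pts)
    loss = loss + lam * ((grad_dom.norm(2, dim=-1) - 1.0) ** 2).mean()

    return loss
```

In practice, `domain_pts` would be drawn from a distribution covering the region around the shape (for example, a mix of uniform samples in a bounding box and perturbed copies of the input points), so that the unit-gradient constraint is enforced away from the data and the zero level set stays smooth.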
