Paper Title


PINs: Progressive Implicit Networks for Multi-Scale Neural Representations

Authors

Zoe Landgraf, Alexander Sorkine Hornung, Ricardo Silveira Cabral

Abstract


Multi-layer perceptrons (MLP) have proven to be effective scene encoders when combined with higher-dimensional projections of the input, commonly referred to as \textit{positional encoding}. However, scenes with a wide frequency spectrum remain a challenge: choosing high frequencies for positional encoding introduces noise in low structure areas, while low frequencies result in poor fitting of detailed regions. To address this, we propose a progressive positional encoding, exposing a hierarchical MLP structure to incremental sets of frequency encodings. Our model accurately reconstructs scenes with wide frequency bands and learns a scene representation at progressive level of detail \textit{without explicit per-level supervision}. The architecture is modular: each level encodes a continuous implicit representation that can be leveraged separately for its respective resolution, meaning a smaller network for coarser reconstructions. Experiments on several 2D and 3D datasets show improvements in reconstruction accuracy, representational capacity and training speed compared to baselines.
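The progressive positional encoding described in the abstract can be sketched as follows. This is a minimal NumPy illustration only: the function names, the frequency schedule (two frequency octaves added per level), and the number of levels are assumptions chosen for illustration, not the paper's exact configuration.

```python
import numpy as np

def positional_encoding(x, freqs):
    """Map coordinates x of shape (N, D) to sin/cos features at the given
    frequency octaves, keeping the raw coordinates as well."""
    feats = [x]
    for f in freqs:
        feats.append(np.sin(2.0 ** f * np.pi * x))
        feats.append(np.cos(2.0 ** f * np.pi * x))
    return np.concatenate(feats, axis=-1)

def progressive_encodings(x, num_levels=4, freqs_per_level=2):
    """Incremental frequency sets: level k is exposed to the octaves
    [0, (k + 1) * freqs_per_level), so coarser levels see only low
    frequencies and finer levels see progressively more."""
    return [
        positional_encoding(x, range((k + 1) * freqs_per_level))
        for k in range(num_levels)
    ]

# Example: 2D coordinates in [0, 1]; each level's encoding would feed
# the corresponding stage of a hierarchical MLP.
coords = np.random.rand(8, 2)
encodings = progressive_encodings(coords)
```

Each level's feature vector grows with the number of exposed octaves (here D + 4·D·octaves per coordinate dimension D), which matches the modular idea in the abstract: a coarse reconstruction only needs the small low-frequency encoding and a correspondingly smaller network.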
