Paper Title
Discretization Invariant Networks for Learning Maps between Neural Fields
Paper Authors
Paper Abstract
With the emergence of powerful representations of continuous data in the form of neural fields, there is a need for discretization invariant learning: an approach for learning maps between functions on continuous domains without being sensitive to how the function is sampled. We present a new framework for understanding and designing discretization invariant neural networks (DI-Nets), which generalizes many discrete networks such as convolutional neural networks as well as continuous networks such as neural operators. Our analysis establishes upper bounds on the deviation in model outputs under different finite discretizations, and highlights the central role of point set discrepancy in characterizing such bounds. This insight leads to the design of a family of neural networks driven by numerical integration via quasi-Monte Carlo sampling with discretizations of low discrepancy. We prove by construction that DI-Nets universally approximate a large class of maps between integrable function spaces, and show that discretization invariance also describes backpropagation through such models. Applied to neural fields, convolutional DI-Nets can learn to classify and segment visual data under various discretizations, and sometimes generalize to new types of discretizations at test time. Code: https://github.com/clintonjwang/DI-net.
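To make the central mechanism concrete, below is a minimal, hypothetical sketch (not the authors' released code) of a single discretization-invariant layer: a linear integral operator approximated by quasi-Monte Carlo integration over a low-discrepancy point set. The helper names `qmc_points` and `integral_layer` are illustrative only; the sketch assumes SciPy's Sobol sequence generator for the low-discrepancy discretization.

```python
# Minimal sketch of the core idea: approximate an integral operator
# (Lf)(y) = ∫ k(y, x) f(x) dx by quasi-Monte Carlo sampling, so the layer's
# output depends only weakly on how the input field is discretized.
import numpy as np
from scipy.stats import qmc  # Sobol low-discrepancy sequences

def qmc_points(n, dim=2, seed=0):
    """Low-discrepancy sample points in [0, 1]^dim (scrambled Sobol)."""
    return qmc.Sobol(d=dim, scramble=True, seed=seed).random(n)

def integral_layer(field, points, kernel, queries):
    """QMC estimate of (Lf)(y) = ∫ k(y, x) f(x) dx at each query location y.

    field:   callable f(x), the continuous input (e.g. a neural field)
    points:  (n, dim) sample locations discretizing the domain
    kernel:  callable k(y, x), the (learned or fixed) kernel weight
    queries: (m, dim) locations at which to evaluate the output field
    """
    fx = np.array([field(x) for x in points])          # f evaluated at the discretization
    out = []
    for y in queries:
        w = np.array([kernel(y, x) for x in points])   # kernel weights k(y, x_i)
        out.append(np.mean(w * fx))                    # (1/n) Σ k(y, x_i) f(x_i)
    return np.array(out)

# Example: a smooth test field and a Gaussian kernel.
field = lambda x: np.sin(2 * np.pi * x[0]) * np.cos(2 * np.pi * x[1])
kernel = lambda y, x: np.exp(-np.sum((y - x) ** 2) / 0.1)
queries = qmc_points(8, seed=1)

# Two different low-discrepancy discretizations give nearly identical outputs,
# illustrating (approximate) discretization invariance of the layer.
out_a = integral_layer(field, qmc_points(1024, seed=2), kernel, queries)
out_b = integral_layer(field, qmc_points(1024, seed=3), kernel, queries)
print(np.max(np.abs(out_a - out_b)))  # small deviation, bounded by point set discrepancy
```

In this toy setting the deviation between the two outputs shrinks as the discrepancy of the point sets decreases, which is the behavior the paper's upper bounds characterize; the actual DI-Net layers and their parameterization are described in the paper and repository linked above.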