Paper Title
Implicitly Defined Layers in Neural Networks
Paper Authors
Paper Abstract
In conventional formulations of multilayer feedforward neural networks, the individual layers are customarily defined by explicit functions. In this paper we demonstrate that defining individual layers in a neural network \emph{implicitly} provides much richer representations than standard explicit ones, consequently enabling a vastly broader class of end-to-end trainable architectures. We present a general framework for implicitly defined layers, where much of the theoretical analysis of such layers can be addressed through the implicit function theorem. We also show how implicitly defined layers can be seamlessly incorporated into existing machine learning libraries, in particular with respect to current automatic differentiation techniques used in backpropagation-based training. Finally, we demonstrate the versatility and relevance of our proposed approach, with promising results, on a number of diverse example problems.
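To illustrate the idea the abstract describes, the following is a minimal sketch (not taken from the paper) of an implicitly defined layer: the output z is not computed by an explicit formula but is defined as the root of an equation g(x, z) = 0, solved numerically in the forward pass, with the gradient dz/dx obtained via the implicit function theorem rather than by differentiating through the solver. The specific equation g(x, z) = z^3 + z - x is a hypothetical choice made here only because it has a unique, easily verified root.

```python
import numpy as np

def implicit_layer(x, tol=1e-10, max_iter=50):
    """Forward pass: return z solving g(x, z) = z**3 + z - x = 0.

    g is strictly increasing in z (dg/dz = 3 z**2 + 1 > 0), so the
    root is unique; we find it with Newton's method.
    """
    z = 0.0
    for _ in range(max_iter):
        g = z**3 + z - x
        if abs(g) < tol:
            break
        z -= g / (3 * z**2 + 1)  # Newton step: g / (dg/dz)
    return z

def implicit_layer_grad(x):
    """Backward pass via the implicit function theorem:

        dz/dx = -(dg/dz)^{-1} (dg/dx) = 1 / (3 z**2 + 1),

    since dg/dx = -1. Note this needs only the converged z, not the
    intermediate Newton iterates.
    """
    z = implicit_layer(x)
    return 1.0 / (3 * z**2 + 1)
```

For x = 2 the root is z = 1 (since 1 + 1 - 2 = 0), and the implicit-function-theorem gradient 1 / (3 + 1) = 0.25 matches a finite-difference estimate, which is how such a layer can be hooked into an automatic differentiation library: the solver defines the forward computation, and the closed-form linear solve defines its custom vector-Jacobian product.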