Paper Title

From Kernel Methods to Neural Networks: A Unifying Variational Formulation

Paper Authors

Unser, Michael

Paper Abstract

The minimization of a data-fidelity term and an additive regularization functional gives rise to a powerful framework for supervised learning. In this paper, we present a unifying regularization functional that depends on an operator and on a generic Radon-domain norm. We establish the existence of a minimizer and give the parametric form of the solution(s) under very mild assumptions. When the norm is Hilbertian, the proposed formulation yields a solution that involves radial-basis functions and is compatible with the classical methods of machine learning. By contrast, for the total-variation norm, the solution takes the form of a two-layer neural network with an activation function that is determined by the regularization operator. In particular, we retrieve the popular ReLU networks by letting the operator be the Laplacian. We also characterize the solution for the intermediate regularization norms $\|\cdot\|=\|\cdot\|_{L_p}$ with $p\in(1,2]$. Our framework offers guarantees of universal approximation for a broad family of regularization operators or, equivalently, for a wide variety of shallow neural networks, including the cases (such as ReLU) where the activation function is increasing polynomially. It also explains the favorable role of bias and skip connections in neural architectures.
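For orientation, the variational problem described in the abstract can be sketched as follows (a schematic based on the abstract only; the symbols $E$, $\mathrm{L}$, $\lambda$, $K$, and the affine term are illustrative rather than the paper's exact notation):

$$\min_{f}\ \sum_{m=1}^{M} E\bigl(y_m, f(\boldsymbol{x}_m)\bigr) \;+\; \lambda\,\bigl\|\mathrm{L}\{f\}\bigr\|,$$

where $E$ is the data-fidelity term, $\mathrm{L}$ is the regularization operator, and $\|\cdot\|$ is the generic Radon-domain norm. For the total-variation norm, the abstract reports that a minimizer takes the two-layer form

$$f(\boldsymbol{x}) \;=\; \sum_{k=1}^{K} a_k\,\sigma\bigl(\boldsymbol{w}_k^{\top}\boldsymbol{x} - b_k\bigr) \;+\; \boldsymbol{c}^{\top}\boldsymbol{x} + c_0,$$

with the activation $\sigma$ determined by $\mathrm{L}$ (the ReLU when $\mathrm{L}$ is the Laplacian); the affine part $\boldsymbol{c}^{\top}\boldsymbol{x} + c_0$ reflects the bias and skip connections mentioned at the end of the abstract.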
