Paper Title

The Nyström method for convex loss functions

Authors

Andrea Della Vecchia, Ernesto De Vito, Jaouad Mourtada, Lorenzo Rosasco

Abstract

We investigate an extension of classical empirical risk minimization, where the hypothesis space consists of a random subspace within a given Hilbert space. Specifically, we examine the Nyström method, where the subspaces are defined by a random subset of the data. This approach recovers Nyström approximations used in kernel methods as a specific case. Using random subspaces naturally leads to computational advantages, but a key question is whether it compromises learning accuracy. Recently, the tradeoffs between statistics and computation have been explored for the square loss and self-concordant losses, such as the logistic loss. In this paper, we extend these analyses to general convex Lipschitz losses, which may lack smoothness, such as the hinge loss used in support vector machines. Our main results show the existence of various scenarios where computational gains can be achieved without sacrificing learning performance. When specialized to smooth loss functions, our analysis recovers most previous results. Moreover, it allows one to consider classification problems and to translate the surrogate risk bounds into classification error bounds. Indeed, this gives the opportunity to compare the effect of Nyström approximations when combined with different loss functions, such as the hinge or the square loss.
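To make the setting concrete, the snippet below is a minimal sketch of one instantiation of the approach the abstract describes: kernel empirical risk minimization with a nonsmooth convex Lipschitz loss (the hinge loss), restricted to the subspace spanned by a random subset of m landmark points. It uses scikit-learn's Nystroem feature map together with a linear hinge-loss SVM; the synthetic dataset, kernel bandwidth gamma, number of landmarks m, and regularization C are illustrative choices, not values or code from the paper.

```python
# Sketch: Nyström approximation + hinge loss (convex, Lipschitz, nonsmooth).
# Training a linear SVM on m Nyström features approximates kernel ERM over
# the random subspace spanned by m landmark points, with m << n.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.kernel_approximation import Nystroem
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

m = 100  # number of random landmarks: controls the computational budget
model = make_pipeline(
    Nystroem(kernel="rbf", gamma=0.1, n_components=m, random_state=0),
    LinearSVC(loss="hinge", C=1.0, dual=True),  # hinge loss, as in SVMs
)
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```

Varying m trades computation against statistical accuracy; the paper's results characterize regimes in which m can be taken much smaller than n without degrading learning performance.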
