Paper Title

Noisy Linear Convergence of Stochastic Gradient Descent for CV@R Statistical Learning under Polyak-Łojasiewicz Conditions

Authors

Kalogerias, Dionysios S.

Abstract

Conditional Value-at-Risk ($\mathrm{CV@R}$) is one of the most popular measures of risk, which has been recently considered as a performance criterion in supervised statistical learning, as it is related to desirable operational features in modern applications, such as safety, fairness, distributional robustness, and prediction error stability. However, due to its variational definition, $\mathrm{CV@R}$ is commonly believed to result in difficult optimization problems, even for smooth and strongly convex loss functions. We disprove this statement by establishing noisy (i.e., fixed-accuracy) linear convergence of stochastic gradient descent for sequential $\mathrm{CV@R}$ learning, for a large class of not necessarily strongly convex (or even convex) loss functions satisfying a set-restricted Polyak-Łojasiewicz inequality. This class contains all smooth and strongly convex losses, confirming that classical problems, such as linear least squares regression, can be solved efficiently under the $\mathrm{CV@R}$ criterion, just as their risk-neutral versions. Our results are illustrated numerically on such a risk-aware ridge regression task, also verifying their validity in practice.
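
The variational definition referred to above is the standard Rockafellar-Uryasev form, $\mathrm{CV@R}_{\alpha}(Z) = \min_{t \in \mathbb{R}} \{ t + \mathbb{E}[(Z - t)_{+}]/(1-\alpha) \}$, which turns risk-aware learning into a joint stochastic optimization over the model parameters and the auxiliary scalar $t$. Below is a minimal, hypothetical sketch of the resulting SGD recursion for a risk-aware ridge regression task; the data model, step size, penalty, and CV@R level are illustrative assumptions, not the paper's actual algorithmic or experimental setup.

```python
# Minimal sketch (not the authors' code): SGD on the Rockafellar-Uryasev
# variational form of CV@R for a ridge regression loss, run on streaming samples.
import numpy as np

rng = np.random.default_rng(0)

# Assumed synthetic linear model: y = <theta_true, x> + heavy-tailed noise
d, n_steps = 10, 20000
theta_true = rng.normal(size=d)

alpha = 0.95   # CV@R level: emphasize the worst (1 - alpha) fraction of losses
lam = 1e-2     # ridge penalty (illustrative)
eta = 1e-3     # constant step size (yields convergence up to a fixed-accuracy ball)

theta = np.zeros(d)  # model parameters
t = 0.0              # auxiliary VaR variable from the variational definition

for _ in range(n_steps):
    # Draw one fresh sample (sequential / streaming setting)
    x = rng.normal(size=d)
    y = theta_true @ x + 0.1 * rng.standard_t(df=3)

    residual = theta @ x - y
    loss = residual ** 2 + lam * np.dot(theta, theta)  # per-sample ridge loss

    # Stochastic subgradient of  t + (loss - t)_+ / (1 - alpha)  w.r.t. (theta, t)
    exceed = 1.0 if loss > t else 0.0
    grad_t = 1.0 - exceed / (1.0 - alpha)
    grad_theta = (exceed / (1.0 - alpha)) * (2.0 * residual * x + 2.0 * lam * theta)

    theta -= eta * grad_theta
    t -= eta * grad_t

print("parameter error:", np.linalg.norm(theta - theta_true))
print("estimated VaR threshold t:", t)
```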
