Paper Title
Optimistic bounds for multi-output prediction
Paper Authors
Paper Abstract
We investigate the challenge of multi-output learning, where the goal is to learn a vector-valued function from a supervised data set. This setting covers a range of important problems in machine learning, including multi-target regression, multi-class classification, and multi-label classification. We begin our analysis by introducing the self-bounding Lipschitz condition for multi-output loss functions, which interpolates continuously between a classical Lipschitz condition and a multi-dimensional analogue of a smoothness condition. We then show that the self-bounding Lipschitz condition gives rise to optimistic bounds for multi-output learning, which are minimax optimal up to logarithmic factors. The proof exploits local Rademacher complexity combined with a powerful minoration inequality due to Srebro, Sridharan and Tewari. As an application, we derive a state-of-the-art generalization bound for multi-class gradient boosting.
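To make the abstract's central definition and the shape of the resulting bounds concrete, the following is an illustrative sketch rather than the paper's verbatim statement: the choice of norm, the constants, and the exponent conventions here are assumptions. A multi-output loss \(\ell : \mathbb{R}^q \times \mathcal{Y} \to [0,\infty)\) would be called \((\lambda, \theta)\)-self-bounding Lipschitz if for all \(u, v \in \mathbb{R}^q\) and \(y \in \mathcal{Y}\),

\[
|\ell(u, y) - \ell(v, y)| \;\le\; \lambda \cdot \max\{\ell(u, y),\, \ell(v, y)\}^{\theta} \cdot \|u - v\|_{\infty}.
\]

Taking \(\theta = 0\) recovers the classical Lipschitz condition, while \(\theta = 1/2\) plays the role of a smoothness-type condition, so intermediate values of \(\theta\) interpolate between the two. An "optimistic" bound, in turn, is one whose dominant term scales with the empirical risk, schematically

\[
\mathcal{R}(\hat{f}) \;\lesssim\; \widehat{\mathcal{R}}(\hat{f}) + \sqrt{\widehat{\mathcal{R}}(\hat{f}) \cdot \frac{C_n}{n}} + \frac{C_n}{n},
\]

where \(n\) is the sample size and \(C_n\) collects complexity and logarithmic terms. Such a bound improves on the classical \(\sqrt{C_n / n}\) rate whenever the empirical risk \(\widehat{\mathcal{R}}(\hat{f})\) is small, which is the sense in which bounds of this type are called optimistic.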