Paper Title
Nonlinear Functional Output Regression: a Dictionary Approach
Authors
Abstract
To address functional-output regression, we introduce projection learning (PL), a novel dictionary-based approach that learns to predict a function expanded on a dictionary while minimizing an empirical risk based on a functional loss. PL makes it possible to use non-orthogonal dictionaries and can then be combined with dictionary learning; it is thus much more flexible than expansion-based approaches relying on vectorial losses. This general method is instantiated with reproducing kernel Hilbert spaces of vector-valued functions as kernel-based projection learning (KPL). For the functional square loss, two closed-form estimators are proposed, one for fully observed output functions and the other for partially observed ones. Both are backed theoretically by an excess risk analysis. Then, in the more general setting of integral losses based on differentiable ground losses, KPL is implemented using first-order optimization for both fully and partially observed output functions. Finally, several robustness aspects of the proposed algorithms are highlighted on a toy dataset, and a study on two real datasets shows that they are competitive with other nonlinear approaches. Notably, using the square loss and a learnt dictionary, KPL enjoys a particularly attractive trade-off between computational cost and performance.
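To give a concrete sense of the closed-form estimator mentioned in the abstract, the sketch below implements one plausible instance of KPL with the functional square loss and fully observed outputs, using a separable scalar kernel and a ridge penalty. All names (`fit_kpl_square_loss`, the grid-based L2 approximation, the eigendecomposition trick to decouple the linear system) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fit_kpl_square_loss(K, Y, Phi, lam):
    """Sketch of a closed-form KPL fit for the functional square loss.

    K   : (n, n) kernel Gram matrix on the inputs (assumption: scalar kernel)
    Y   : (n, m) output functions observed on a shared grid of m points
    Phi : (d, m) dictionary functions evaluated on the same grid
    lam : ridge regularization strength

    Returns A (n, d): the prediction at a new input x is the dictionary
    expansion Phi^T (A^T k(x)), with k(x) the kernel vector against training
    inputs. Minimizes (1/(n*m)) ||K A Phi - Y||_F^2 + lam * tr(A^T K A).
    """
    n, m = Y.shape
    # Riemann-sum approximation of the L2 inner product on the grid.
    G = (Phi @ Phi.T) / m              # (d, d) dictionary Gram matrix
    s, U = np.linalg.eigh(G)           # diagonalize G to decouple the system
    B = (Y @ Phi.T / m) @ U            # (n, d) right-hand sides, one per mode
    A_tilde = np.zeros((n, len(s)))
    for j in range(len(s)):
        # Stationarity gives (s_j K + n*lam I) a_j = b_j per eigendirection.
        A_tilde[:, j] = np.linalg.solve(s[j] * K + n * lam * np.eye(n), B[:, j])
    return A_tilde @ U.T

def predict(A, Phi, k_new):
    """k_new: (n,) kernel evaluations between training inputs and a new x."""
    return (k_new @ A) @ Phi           # predicted function values on the grid
```

Because the dictionary need not be orthogonal, the (d, d) Gram matrix G is generally non-diagonal; its eigendecomposition reduces the fit to d independent n-by-n ridge solves, which is where the favorable computational cost mentioned in the abstract comes from.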