Paper Title

Boosting as Frank-Wolfe

Paper Authors

Mitsuboshi, Ryotaro; Hatano, Kohei; Takimoto, Eiji

Paper Abstract

Some boosting algorithms, such as LPBoost, ERLPBoost, and C-ERLPBoost, aim to solve the soft margin optimization problem with $\ell_1$-norm regularization. LPBoost rapidly converges to an $\epsilon$-approximate solution in practice, but it is known to take $\Omega(m)$ iterations in the worst case, where $m$ is the sample size. On the other hand, ERLPBoost and C-ERLPBoost are guaranteed to converge to an $\epsilon$-approximate solution in $O(\frac{1}{\epsilon^2} \ln \frac{m}{\nu})$ iterations, but their computational cost per iteration is much higher than that of LPBoost. To address this issue, we propose a generic boosting scheme that combines the Frank-Wolfe algorithm with any secondary algorithm and switches between them iteratively. We show that the scheme retains the same convergence guarantee as ERLPBoost and C-ERLPBoost, and one can incorporate any secondary algorithm to improve performance in practice. This scheme arises from a unified view of boosting algorithms for soft margin optimization; more specifically, we show that LPBoost, ERLPBoost, and C-ERLPBoost are all instances of the Frank-Wolfe algorithm. In experiments on real datasets, one instance of our scheme exploits the better updates of the secondary algorithm and performs comparably with LPBoost.
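To illustrate the "combine Frank-Wolfe with a secondary algorithm and switch between them" idea described in the abstract, here is a minimal sketch in Python. It is not the paper's algorithm: the paper works on the entropy-regularized soft-margin dual and uses a weak learner as the linear minimization oracle, whereas this sketch minimizes a generic smooth convex objective over the probability simplex and simply keeps whichever of the two candidate iterates (the Frank-Wolfe step or the secondary step) has the lower objective value. All names here (`hybrid_frank_wolfe`, `secondary_step`, and so on) are hypothetical.

```python
import numpy as np

def hybrid_frank_wolfe(obj, grad, secondary_step, x0, n_iters=100):
    """Frank-Wolfe over the probability simplex that, at every round, also
    queries a user-supplied secondary update and keeps whichever of the two
    candidates has the smaller objective value.  Keeping the better candidate
    preserves the Frank-Wolfe progress bound while letting a good secondary
    algorithm speed things up in practice (a sketch, not the paper's method)."""
    x = x0.copy()
    for t in range(n_iters):
        g = grad(x)
        # Linear minimization oracle over the simplex: put all mass on the
        # coordinate with the smallest partial derivative.
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0
        gamma = 2.0 / (t + 2.0)               # standard Frank-Wolfe step size
        x_fw = (1.0 - gamma) * x + gamma * s  # Frank-Wolfe candidate
        x_sec = secondary_step(x)             # secondary candidate; assumed feasible
        # Switch to whichever candidate achieves the lower objective value.
        x = x_fw if obj(x_fw) <= obj(x_sec) else x_sec
    return x

if __name__ == "__main__":
    # Toy usage: minimize a smooth quadratic over the 2-simplex, with a
    # trivial (no-op) secondary update standing in for a real algorithm.
    A = np.array([[2.0, 0.5], [0.5, 1.0]])
    obj = lambda x: 0.5 * x @ A @ x
    grad = lambda x: A @ x
    secondary = lambda x: x
    print(hybrid_frank_wolfe(obj, grad, secondary, np.array([0.5, 0.5])))
```

With this structure, any secondary update that returns a feasible point can be plugged in without affecting the worst-case guarantee, since the Frank-Wolfe candidate is always available as a fallback.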
