Paper Title
Learning Adaptive Loss for Robust Learning with Noisy Labels

Paper Authors

Jun Shu, Qian Zhao, Keyu Chen, Zongben Xu, Deyu Meng

Paper Abstract
Robust loss minimization is an important strategy for handling the robust learning problem on noisy labels. Current robust loss functions, however, inevitably involve hyperparameter(s) that must be tuned manually or heuristically through cross-validation, which makes them fairly hard to apply in practice. Besides, the non-convexity brought by the loss, as well as the complicated network architecture, makes training easily trapped in an unexpected solution with poor generalization capability. To address the above issues, we propose a meta-learning method capable of adaptively learning the hyperparameters of robust loss functions. Specifically, through mutual amelioration between the robust-loss hyperparameters and the network parameters in our method, both can be simultaneously and finely learned and coordinated to attain solutions with good generalization capability. Four kinds of SOTA robust loss functions are integrated into our algorithm, and comprehensive experiments substantiate the general applicability and effectiveness of the proposed method in both accuracy and generalization performance, as compared with the conventional hyperparameter tuning strategy, even when its hyperparameters are carefully tuned.
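The "mutual amelioration" the abstract describes is a bilevel alternation: the network weights are updated on the noisy training loss under the current robust-loss hyperparameter, and the hyperparameter is then updated so that the one-step-updated model performs better on a small clean meta set. Below is a minimal, hedged sketch of that idea, not the paper's implementation: it uses a toy logistic model, the Generalized Cross Entropy loss with hyperparameter `q` as the robust loss, and finite-difference gradients for simplicity. All names (`gce_loss`, `meta_step`, learning rates, the clipping range for `q`) are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gce_loss(p, q):
    # Generalized Cross Entropy: (1 - p^q) / q, approaching CE as q -> 0.
    return np.mean((1.0 - p ** q) / q)

def train_loss(w, X, y, q):
    # Mean GCE of the probability the model assigns to the (possibly noisy) label.
    p = sigmoid(X @ w)
    p_label = np.where(y == 1, p, 1.0 - p)
    return gce_loss(np.clip(p_label, 1e-8, 1.0), q)

def grad_w(w, X, y, q, eps=1e-5):
    # Finite-difference gradient w.r.t. the weights (for clarity, not speed).
    g = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w)
        e[i] = eps
        g[i] = (train_loss(w + e, X, y, q) - train_loss(w - e, X, y, q)) / (2 * eps)
    return g

def meta_step(w, q, X_tr, y_tr, X_val, y_val, lr_w=0.5, lr_q=0.1, eps=1e-4):
    # Inner step: update weights on the noisy training set with the current q.
    w_new = w - lr_w * grad_w(w, X_tr, y_tr, q)

    # Outer step: move q so the one-step-updated model does better on clean
    # meta data (finite-difference gradient through the virtual update).
    def val_after(q_):
        w_ = w - lr_w * grad_w(w, X_tr, y_tr, q_)
        return train_loss(w_, X_val, y_val, 1.0)  # q=1 gives a plain loss here
    dq = (val_after(q + eps) - val_after(q - eps)) / (2 * eps)
    q_new = float(np.clip(q - lr_q * dq, 0.05, 1.0))  # keep q in a valid range
    return w_new, q_new
```

Alternating `meta_step` over epochs jointly refines `w` and `q`; in the paper this outer gradient is computed by backpropagating through the virtual update rather than by finite differences, and the same scheme applies to the hyperparameters of other robust losses.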