Paper Title

Beyond Supervised Continual Learning: a Review

Paper Authors

Benedikt Bagus, Alexander Gepperth, Timothée Lesort

Paper Abstract

Continual Learning (CL, sometimes also termed incremental learning) is a flavor of machine learning where the usual assumption of a stationary data distribution is relaxed or omitted. When naively applying, e.g., DNNs to CL problems, changes in the data distribution can cause the so-called catastrophic forgetting (CF) effect: an abrupt loss of previous knowledge. Although many significant contributions to enabling CL have been made in recent years, most works address supervised (classification) problems. This article reviews literature that studies CL in other settings, such as learning with reduced supervision, fully unsupervised learning, and reinforcement learning. Besides proposing a simple schema for classifying CL approaches w.r.t. their level of autonomy and supervision, we discuss the specific challenges associated with each setting and the potential contributions to the field of CL in general.
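
The abstract describes catastrophic forgetting as an abrupt loss of previous knowledge when the data distribution changes. The sketch below is a minimal, hypothetical NumPy illustration of that effect (it is not from the paper): a softmax classifier is trained with plain SGD on a toy Task A (classes 0/1) and then on Task B (classes 2/3) with no CL mechanism, after which its Task A accuracy collapses. All function names, class layouts, and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(means, n=500):
    """Toy data: one 2-D Gaussian blob per (mean, class-label) pair."""
    X = np.vstack([rng.normal(m, 0.5, size=(n, 2)) for m, _ in means])
    y = np.concatenate([np.full(n, c) for _, c in means])
    return X, y

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train(W, b, X, y, epochs=200, lr=0.1):
    """Plain gradient descent on cross-entropy -- deliberately no CL mechanism."""
    for _ in range(epochs):
        p = softmax(X @ W + b)
        p[np.arange(len(y)), y] -= 1.0          # dL/dlogits = softmax - one-hot
        W -= lr * X.T @ p / len(y)
        b -= lr * p.mean(axis=0)
    return W, b

def accuracy(W, b, X, y):
    return ((X @ W + b).argmax(axis=1) == y).mean()

# Task A: classes 0 and 1; Task B: classes 2 and 3 (toy class-incremental setup).
Xa, ya = make_task([((-2, 0), 0), ((2, 0), 1)])
Xb, yb = make_task([((0, -2), 2), ((0, 2), 3)])

W, b = np.zeros((2, 4)), np.zeros(4)
W, b = train(W, b, Xa, ya)
print("Task A accuracy after training on A:", accuracy(W, b, Xa, ya))  # close to 1.0

W, b = train(W, b, Xb, yb)  # continue training on Task B data only
print("Task A accuracy after training on B:", accuracy(W, b, Xa, ya))  # collapses
```

In this toy run the second training phase overwrites the weights and biases that supported the first task's classes, which is the kind of collapse that the CL methods surveyed in the paper are designed to prevent.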
