Paper Title
Lifelong Learning Process: Self-Memory Supervising and Dynamically Growing Networks
Paper Authors
Paper Abstract
From childhood to youth, humans gradually come to know the world, but for neural networks this growing process seems difficult. Trapped in catastrophic forgetting, current researchers feed data of all categories to a neural network whose structure remains unchanged throughout the whole training process. We compare this training process with human learning patterns and find two major conflicts. In this paper, we study how to resolve these conflicts for generative models based on the conditional variational autoencoder (CVAE). To resolve the discontinuity conflict, we apply a memory playback strategy to maintain the model's ability to recognize and generate previously learned categories whose data are no longer visible, and we extend the traditional one-way CVAE to a circulatory mode to better accomplish the memory playback strategy. To resolve the `dead' structure conflict, we rewrite the CVAE formula and are thereby able to give a novel interpretation of the functions of the different parts of the CVAE model. Based on this new understanding, we find ways to dynamically extend the network structure when training on new categories. We verify the effectiveness of our methods on MNIST and Fashion-MNIST and display some very interesting results.
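The memory playback strategy described above can be sketched as follows: before training on a new category, the previously trained generative model replays pseudo-samples for the categories it has already learned, and these are mixed with the real data of the new category. This is a minimal illustrative sketch only; the function names (`fake_generator`, `build_replay_batch`) and the dummy sample format are assumptions for illustration, not the paper's actual implementation.

```python
import random

def fake_generator(category):
    """Stand-in for the trained CVAE decoder: returns a dummy pseudo-sample
    conditioned on the given category label."""
    return {"category": category, "pixels": [0.0] * 4}

def build_replay_batch(seen_categories, new_category, new_data, replay_per_class=2):
    """Mix generated replays of old categories with real data of the new one,
    so the model never trains on the new class alone."""
    batch = []
    for c in seen_categories:                     # replay previously learned classes
        for _ in range(replay_per_class):
            batch.append((fake_generator(c), c))
    for x in new_data:                            # real data for the new class
        batch.append((x, new_category))
    random.shuffle(batch)                         # interleave old and new samples
    return batch

# Example: classes 0 and 1 already learned, class 2 arriving with 3 real samples.
batch = build_replay_batch(seen_categories=[0, 1], new_category=2,
                           new_data=[{"pixels": [1.0] * 4}] * 3)
print(len(batch))  # 2 old classes * 2 replays + 3 new samples = 7
```

The key design point is that the generator itself supervises the next round of training, which is why the paper extends the one-way CVAE to a circulatory mode: generated samples must be usable as training inputs again.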