Title
KASAM: Spline Additive Models for Function Approximation
Authors
Abstract
Neural networks have been criticised for their inability to perform continual learning: catastrophic forgetting causes the rapid unlearning of past concepts when a new concept is introduced. Catastrophic forgetting can be alleviated by specially designed models and training techniques. This paper outlines a novel Spline Additive Model (SAM). SAM exhibits intrinsic memory retention with sufficient expressive power for many practical tasks, but it is not a universal function approximator. SAM is extended via the Kolmogorov-Arnold representation theorem into a novel universal function approximator, called the Kolmogorov-Arnold Spline Additive Model (KASAM). The memory retention, expressive power, and limitations of SAM and KASAM are illustrated analytically and empirically. SAM exhibited robust but imperfect memory retention, with small regions of overlapping interference in sequential learning tasks. KASAM exhibited greater susceptibility to catastrophic forgetting; combined with pseudo-rehearsal training techniques, however, it exhibited superior performance in regression tasks and memory retention.
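For reference, the Kolmogorov-Arnold representation theorem on which KASAM rests states that any continuous function on the n-dimensional unit cube can be written as a finite sum of compositions of continuous univariate functions:

$$
f(x_1, \dots, x_n) = \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right)
$$

KASAM presumably parameterises both the inner functions $\phi_{q,p}$ and the outer functions $\Phi_q$ as trainable splines, which is what lifts the purely additive SAM to a universal approximator.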
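To make the additive structure concrete, here is a minimal, illustrative NumPy sketch of a SAM-style model. It is not the authors' implementation: the hat-basis parameterisation, knot count, and learning rate are all assumptions. Each input dimension gets its own univariate spline on a fixed knot grid, and the model output is the sum of the per-dimension splines; the local support of the spline basis is the mechanism behind the intrinsic memory retention claimed above.

```python
import numpy as np

def hat_basis(x, knots):
    """Piecewise-linear 'hat' (linear B-spline) basis on a uniform knot grid.
    Each basis function has local support, so a single training point
    activates, and later updates, only a handful of coefficients."""
    h = knots[1] - knots[0]  # uniform knot spacing
    return np.clip(1.0 - np.abs(x - knots) / h, 0.0, None)

class SAM:
    """Illustrative Spline Additive Model: f(x) = sum_i s_i(x_i), where each
    s_i is a univariate spline with trainable coefficients. This is an
    assumed general form; the paper's exact parameterisation may differ."""

    def __init__(self, n_dims, n_knots=16, seed=0):
        rng = np.random.default_rng(seed)
        self.knots = np.linspace(0.0, 1.0, n_knots)  # inputs assumed in [0, 1]
        self.coef = 0.01 * rng.standard_normal((n_dims, n_knots))

    def features(self, x):
        # x: (n_dims,) -> basis activations of shape (n_dims, n_knots)
        return np.stack([hat_basis(xi, self.knots) for xi in x])

    def __call__(self, x):
        return float(np.sum(self.coef * self.features(x)))

    def sgd_step(self, x, y, lr=0.1):
        # Squared-error gradient step. Because the basis is local, only
        # coefficients near x receive a non-zero update; this locality is
        # the mechanism that limits interference between training regions.
        err = self(x) - y
        self.coef -= lr * err * self.features(x)

# Usage: fit an additive 2-D target with plain online SGD.
sam = SAM(n_dims=2)
rng = np.random.default_rng(1)
for _ in range(5000):
    x = rng.random(2)
    y = np.sin(2 * np.pi * x[0]) + x[1] ** 2
    sam.sgd_step(x, y)
print(sam(np.array([0.25, 0.5])))  # approx sin(pi/2) + 0.25 = 1.25
```

Note that a model of this form can only represent sums of univariate functions, which is exactly why the abstract says SAM is not a universal function approximator and motivates the KASAM extension.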