Paper Title
Multi-timescale Representation Learning in LSTM Language Models
Paper Authors
Abstract
Language models must capture statistical dependencies between words at timescales ranging from very short to very long. Earlier work has demonstrated that dependencies in natural language tend to decay with distance between words according to a power law. However, it is unclear how this knowledge can be used for analyzing or designing neural network language models. In this work, we derived a theory for how the memory gating mechanism in long short-term memory (LSTM) language models can capture power-law decay. We found that unit timescales within an LSTM, which are determined by the forget gate bias, should follow an Inverse Gamma distribution. Experiments then showed that LSTM language models trained on natural English text learn to approximate this theoretical distribution. Further, we found that explicitly imposing the theoretical distribution upon the model during training yielded better language model perplexity overall, with particular improvements for predicting low-frequency (rare) words. Moreover, the explicit multi-timescale model selectively routes information about different types of words through units with different timescales, potentially improving model interpretability. These results demonstrate the importance of careful, theoretically motivated analysis of memory and timescale in language models.
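The link the abstract draws between forget-gate biases and unit timescales can be sketched as follows. With forget gate f = sigmoid(b), cell memory decays roughly as f^t, giving a timescale T = -1/ln(f); inverting this maps any target timescale to a bias. This is a minimal illustration, assuming that relationship; the Inverse-Gamma parameters and function names below are hypothetical, not the paper's fitted values:

```python
import numpy as np

def timescale_to_forget_bias(T):
    """Map a target memory timescale T (in tokens) to an LSTM forget-gate
    bias b. With f = sigmoid(b), memory decays as f^t, so T = -1/ln(f)
    and hence f = exp(-1/T), b = logit(f)."""
    f = np.exp(-1.0 / np.asarray(T, dtype=float))
    return np.log(f) - np.log1p(-f)  # logit(f), stable for f near 1

def sample_multiscale_biases(n_units, alpha=2.0, beta=5.0, seed=0):
    """Sample unit timescales from an Inverse-Gamma(alpha, beta) distribution
    and convert them to forget-gate biases (alpha and beta are illustrative).
    Uses the identity: if Y ~ Gamma(shape=alpha, rate=beta), then
    1/Y ~ InvGamma(alpha, scale=beta)."""
    rng = np.random.default_rng(seed)
    timescales = 1.0 / rng.gamma(shape=alpha, scale=1.0 / beta, size=n_units)
    return timescale_to_forget_bias(timescales)
```

Longer timescales map monotonically to larger biases (the forget gate saturates toward 1), so fixing biases sampled this way would give the LSTM a population of units whose memory horizons follow the theorized Inverse-Gamma distribution.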