Paper title
Internal representation dynamics and geometry in recurrent neural networks
Paper authors
Paper abstract
The efficiency of recurrent neural networks (RNNs) in dealing with sequential data has long been established. However, unlike deep and convolutional networks, where we can attribute the recognition of a certain feature to individual layers, it is unclear what "sub-task" a single recurrent step or layer accomplishes. Our work seeks to shed light on how a vanilla RNN implements a simple classification task by analysing the dynamics of the network and the geometric properties of its hidden states. We find that early internal representations are evocative of the real labels of the data, but this information is not directly accessible to the output layer. Furthermore, the network's dynamics and the sequence length are both critical for correct classification, even when no additional task-relevant information is provided.
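A minimal sketch of the probing idea the abstract describes, not the authors' implementation: a vanilla RNN is trained on a toy sequence-classification task (the `make_data` helper and all hyperparameters are illustrative assumptions), and at every time step the trained output layer applied to the intermediate hidden state is compared with a freshly fitted linear probe on that same state. If the probe recovers the labels earlier than the output layer does, label information is present in the internal representation but not directly accessible to the readout.

```python
# Sketch only: probe per-step hidden states of a vanilla RNN (assumed toy task).
import torch
import torch.nn as nn

torch.manual_seed(0)
SEQ_LEN, INPUT_DIM, HIDDEN_DIM, N_CLASSES = 20, 8, 32, 2

def make_data(n):
    # Hypothetical toy task: the class is the sign of the mean of input channel 0.
    x = torch.randn(n, SEQ_LEN, INPUT_DIM)
    y = (x[:, :, 0].mean(dim=1) > 0).long()
    return x, y

class VanillaRNNClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.RNN(INPUT_DIM, HIDDEN_DIM, batch_first=True)
        self.readout = nn.Linear(HIDDEN_DIM, N_CLASSES)

    def forward(self, x):
        states, _ = self.rnn(x)                      # hidden state at every time step
        return self.readout(states[:, -1]), states   # output layer sees only the final state

def fit_linear(layer, feats, labels, steps=200):
    # Fit a linear map by gradient descent (used for the per-step probes).
    opt = torch.optim.Adam(layer.parameters(), lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(layer(feats), labels).backward()
        opt.step()

model = VanillaRNNClassifier()
x_tr, y_tr = make_data(2000)
x_te, y_te = make_data(500)

# Train the RNN end-to-end on the classification task.
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    logits, _ = model(x_tr)
    nn.functional.cross_entropy(logits, y_tr).backward()
    opt.step()

with torch.no_grad():
    _, states_tr = model(x_tr)
    _, states_te = model(x_te)

# Per time step: accuracy of (a) the trained output layer applied to the
# intermediate hidden state vs. (b) a linear probe fitted to that state.
for t in range(SEQ_LEN):
    with torch.no_grad():
        readout_acc = (model.readout(states_te[:, t]).argmax(1) == y_te).float().mean()
    probe = nn.Linear(HIDDEN_DIM, N_CLASSES)
    fit_linear(probe, states_tr[:, t], y_tr)
    with torch.no_grad():
        probe_acc = (probe(states_te[:, t]).argmax(1) == y_te).float().mean()
    print(f"t={t:2d}  readout acc={readout_acc:.2f}  probe acc={probe_acc:.2f}")
```

Note that the probe only tests whether the labels are linearly decodable from a given hidden state; this is one plausible way to operationalise "accessible", and it stands in for, rather than reproduces, the dynamical and geometric analyses used in the paper.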