Paper Title

How to Dissect a Muppet: The Structure of Transformer Embedding Spaces

Paper Authors

Timothee Mickus, Denis Paperno, Mathieu Constant

Paper Abstract

Pretrained embeddings based on the Transformer architecture have taken the NLP community by storm. We show that they can mathematically be reframed as a sum of vector factors and showcase how to use this reframing to study the impact of each component. We provide evidence that multi-head attentions and feed-forwards are not equally useful in all downstream applications, as well as a quantitative overview of the effects of finetuning on the overall embedding space. This approach allows us to draw connections to a wide range of previous studies, from vector space anisotropy to attention weights.
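As a rough sketch of the reframing the abstract describes (the notation below is an assumption for illustration, not necessarily the paper's own), the residual connections of a Transformer can be unrolled so that each output embedding decomposes additively into per-module contributions:

e_t = i_t + h_t + f_t + c_t

where i_t would collect the input (token and positional) embeddings, h_t the multi-head attention contributions summed over layers, f_t the feed-forward contributions, and c_t residual correction terms such as biases and layer-normalization offsets. Studying each summand separately is what would allow quantifying how much attention versus feed-forward modules shape the final embedding space.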
