Paper Title
VAuLT: Augmenting the Vision-and-Language Transformer for Sentiment Classification on Social Media
Paper Authors
Paper Abstract
We propose the Vision-and-Augmented-Language Transformer (VAuLT). VAuLT is an extension of the popular Vision-and-Language Transformer (ViLT) and improves performance on vision-and-language (VL) tasks that involve more complex text inputs than image captions, while having minimal impact on training and inference efficiency. Importantly, ViLT enables efficient training and inference in VL tasks by encoding images with a linear projection of patches instead of an object detector. However, it is pretrained on captioning datasets, where the language input is simple, literal, and descriptive, and therefore lacks linguistic diversity. Consequently, when working with multimedia data in the wild, such as multimodal social media data, there is a notable shift away from captioning-style language, as well as in the diversity of tasks. We indeed find evidence that the language capacity of ViLT is lacking. The key insight and novelty of VAuLT is to propagate the output representations of a large language model (LM) like BERT to the language input of ViLT. We show that joint training of the LM and ViLT can yield relative improvements of up to 20% over ViLT and achieve state-of-the-art or comparable performance on VL tasks involving richer language inputs and affective constructs, such as Target-Oriented Sentiment Classification on TWITTER-2015 and TWITTER-2017, and Sentiment Classification on MVSA-Single and MVSA-Multiple. Our code is available at https://github.com/gchochla/VAuLT.
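The core mechanism described in the abstract is simple to sketch: BERT encodes the (potentially complex) text, and its final hidden states are fed to ViLT in place of ViLT's own word-embedding lookup, after which ViLT fuses them with the linearly projected image patches. Below is a minimal sketch of this idea using Hugging Face's `BertModel` and `ViltModel`; the class name, checkpoint choices, and classification head are illustrative assumptions, not the authors' exact implementation (see the linked repository for that).

```python
import torch.nn as nn
from transformers import BertModel, ViltModel

class VAuLTSketch(nn.Module):
    """Illustrative sketch of the VAuLT idea: BERT -> ViLT -> classifier."""

    def __init__(self, num_labels: int = 3):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.vilt = ViltModel.from_pretrained("dandelin/vilt-b32-mlm")
        # BERT-base and ViLT-B/32 both use 768-dim hidden states,
        # so no projection layer is needed between them.
        self.classifier = nn.Linear(self.vilt.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask, pixel_values):
        # 1) Encode the text with BERT.
        text_states = self.bert(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state  # (batch, seq_len, 768)
        # 2) Pass BERT's output states to ViLT via `inputs_embeds`,
        #    replacing ViLT's word-embedding lookup; ViLT then fuses
        #    them with the image patch embeddings.
        #    Note: ViLT's text position embeddings cover only 40 tokens
        #    by default, so inputs should be truncated accordingly.
        fused = self.vilt(
            inputs_embeds=text_states,
            attention_mask=attention_mask,
            pixel_values=pixel_values,
        )
        # 3) Classify from ViLT's pooled multimodal representation.
        return self.classifier(fused.pooler_output)

# Usage sketch (tensors as produced by BertTokenizer / ViltImageProcessor):
# logits = VAuLTSketch()(input_ids, attention_mask, pixel_values)
```

Consistent with the abstract's emphasis on joint training, both encoders would be optimized end-to-end together rather than freezing BERT, which is what lets the LM's richer language representations adapt to the downstream sentiment tasks.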