Paper Title

Self-Training Vision Language BERTs with a Unified Conditional Model

Paper Authors

Xiaofeng Yang, Fengmao Lv, Fayao Liu, Guosheng Lin

Paper Abstract

Natural language BERTs are trained on language corpora in a self-supervised manner. Unlike natural language BERTs, vision language BERTs need paired data to train, which restricts the scale of VL-BERT pretraining. We propose a self-training approach that allows training VL-BERTs from unlabeled image data. The proposed method starts with our unified conditional model -- a vision language BERT model that can perform zero-shot conditional generation. Given different conditions, the unified conditional model can generate captions, dense captions, and even questions. We use the labeled image data to train a teacher model and use the trained model to generate pseudo captions on unlabeled image data. We then combine the labeled data and the pseudo-labeled data to train a student model. The process is iterated by using the student model as the new teacher. With the proposed self-training approach and only 300K extra unlabeled images, we obtain competitive or even better performance than models of similar size trained with 3 million extra images.
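The teacher-student loop described in the abstract (train a teacher on labeled pairs, generate pseudo captions on unlabeled images, train a student on the union, then promote the student to teacher) can be sketched in a few lines. The sketch below is illustrative only: `make_model`, `train_fn`, the `generate` method, and the `condition="caption"` token are assumed names for this example, not the paper's actual API.

```python
# Minimal sketch of the self-training loop from the abstract.
# make_model / train_fn / generate are hypothetical placeholders;
# labeled_data is a list of (image, text) pairs, unlabeled_images a list of images.

def self_train(make_model, train_fn, labeled_data, unlabeled_images, rounds=3):
    # Round 0: train the initial teacher on human-labeled pairs only.
    teacher = train_fn(make_model(), labeled_data)

    for _ in range(rounds):
        # The unified conditional model generates text zero-shot under a
        # condition token; per the paper this could also be a dense caption
        # or a question, but only captions are shown here.
        pseudo_data = [(img, teacher.generate(img, condition="caption"))
                       for img in unlabeled_images]

        # Train a student on the labeled and pseudo-labeled data combined.
        student = train_fn(make_model(), labeled_data + pseudo_data)

        # The student becomes the teacher for the next round.
        teacher = student

    return teacher
```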
