Paper Title
Fine-Grained Semantically Aligned Vision-Language Pre-Training
Paper Authors
Paper Abstract
Large-scale vision-language pre-training has shown impressive advances in a wide range of downstream tasks. Existing methods mainly model the cross-modal alignment by the similarity of the global representations of images and texts, or by advanced cross-modal attention over image and text features. However, they fail to explicitly learn the fine-grained semantic alignment between visual regions and textual phrases, as only global image-text alignment information is available. In this paper, we introduce LOUPE, a fine-grained semantically aLigned visiOn-langUage PrE-training framework, which learns fine-grained semantic alignment from the novel perspective of game-theoretic interactions. To efficiently compute the game-theoretic interactions, we further propose an uncertainty-aware neural Shapley interaction learning module. Experiments show that LOUPE achieves state-of-the-art performance on a variety of vision-language tasks. Furthermore, without any object-level human annotations or fine-tuning, LOUPE achieves competitive performance on object detection and visual grounding. More importantly, LOUPE opens a promising new direction for learning fine-grained semantics from large-scale raw image-text pairs. The repository of this work is at https://github.com/YYJMJC/LOUPE.
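For readers unfamiliar with the game-theoretic framing, the interaction between two players in a cooperative game is commonly quantified by the Shapley interaction index. The following is a standard formulation from cooperative game theory, given as background rather than the exact instantiation used in the paper. For a game with player set $N$, value function $v$, and players $i, j \in N$:

$$
I(i, j) = \sum_{S \subseteq N \setminus \{i, j\}} \frac{|S|!\,(|N| - |S| - 2)!}{(|N| - 1)!} \Big[ v(S \cup \{i, j\}) - v(S \cup \{i\}) - v(S \cup \{j\}) + v(S) \Big]
$$

In LOUPE's setting, one may loosely think of visual regions and textual phrases as players and the model's alignment score as $v$: a large positive $I(i, j)$ indicates that a region and a phrase contribute jointly rather than independently. Because the sum ranges over exponentially many coalitions $S$, exact computation is intractable, which motivates the paper's efficient uncertainty-aware neural Shapley interaction learning module.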