Paper Title

SLAN: Self-Locator Aided Network for Cross-Modal Understanding

Authors

Jiang-Tian Zhai, Qi Zhang, Tong Wu, Xing-Yu Chen, Jiang-Jiang Liu, Bo Ren, Ming-Ming Cheng

Abstract

Learning fine-grained interplay between vision and language enables a more accurate understanding of vision-language tasks. However, it remains challenging to extract key image regions according to the texts for semantic alignments. Most existing works are either limited by the text-agnostic and redundant regions obtained with frozen detectors, or fail to scale further due to their heavy reliance on scarce grounding (gold) data to pre-train detectors. To solve these problems, we propose Self-Locator Aided Network (SLAN) for cross-modal understanding tasks without any extra gold data. SLAN consists of a region filter and a region adaptor to localize regions of interest conditioned on different texts. By aggregating cross-modal information, the region filter selects key regions and the region adaptor updates their coordinates with text guidance. With detailed region-word alignments, SLAN can be easily generalized to many downstream tasks. It achieves fairly competitive results on five cross-modal understanding tasks (e.g., 85.7% and 69.2% on COCO image-to-text and text-to-image retrieval, surpassing previous SOTA methods). SLAN also demonstrates strong zero-shot and fine-tuned transferability to two localization tasks.
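To make the region filter / region adaptor idea from the abstract concrete, below is a minimal, hedged PyTorch sketch: a filter that scores candidate region features against a pooled text embedding and keeps the top-k, and an adaptor that predicts text-conditioned coordinate offsets for the kept boxes. All module names, feature dimensions, the top-k selection scheme, and the box parameterization are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only: assumed shapes, top-k scoring, and offset-based
# box refinement; NOT the authors' released SLAN code.
import torch
import torch.nn as nn


class RegionFilter(nn.Module):
    """Scores candidate regions against a text embedding and keeps the top-k."""

    def __init__(self, dim: int, top_k: int = 8):
        super().__init__()
        self.top_k = top_k
        self.score = nn.Linear(2 * dim, 1)  # joint region-text scoring head

    def forward(self, region_feats, text_feat):
        # region_feats: (B, N, D) candidate region features
        # text_feat:    (B, D)    pooled text embedding
        text = text_feat.unsqueeze(1).expand_as(region_feats)
        logits = self.score(torch.cat([region_feats, text], dim=-1)).squeeze(-1)  # (B, N)
        top_idx = logits.topk(self.top_k, dim=1).indices                           # (B, K)
        gather = top_idx.unsqueeze(-1).expand(-1, -1, region_feats.size(-1))
        return region_feats.gather(1, gather), top_idx


class RegionAdaptor(nn.Module):
    """Refines region boxes (cx, cy, w, h) with offsets predicted under text guidance."""

    def __init__(self, dim: int):
        super().__init__()
        self.offset = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 4))

    def forward(self, region_feats, boxes, text_feat):
        # region_feats: (B, K, D), boxes: (B, K, 4), text_feat: (B, D)
        text = text_feat.unsqueeze(1).expand_as(region_feats)
        delta = self.offset(torch.cat([region_feats, text], dim=-1))
        return boxes + delta  # text-conditioned coordinate update


if __name__ == "__main__":
    B, N, D = 2, 32, 256
    feats, boxes, text = torch.randn(B, N, D), torch.rand(B, N, 4), torch.randn(B, D)
    flt, adp = RegionFilter(D, top_k=8), RegionAdaptor(D)
    kept, idx = flt(feats, text)
    kept_boxes = boxes.gather(1, idx.unsqueeze(-1).expand(-1, -1, 4))
    refined = adp(kept, kept_boxes, text)
    print(kept.shape, refined.shape)  # torch.Size([2, 8, 256]) torch.Size([2, 8, 4])
```

The sketch only shows the data flow implied by the abstract (select key regions, then refine their coordinates under text guidance); how the candidate regions are produced and how the alignments are trained is left out.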
