Paper Title
G^3: Geolocation via Guidebook Grounding
Paper Authors

Abstract
We demonstrate how language can improve geolocation: the task of predicting the location where an image was taken. Here we study explicit knowledge from human-written guidebooks that describe the salient and class-discriminative visual features humans use for geolocation. We propose the task of Geolocation via Guidebook Grounding that uses a dataset of StreetView images from a diverse set of locations and an associated textual guidebook for GeoGuessr, a popular interactive geolocation game. Our approach predicts a country for each image by attending over the clues automatically extracted from the guidebook. Supervising attention with country-level pseudo labels achieves the best performance. Our approach substantially outperforms a state-of-the-art image-only geolocation method, with an improvement of over 5% in Top-1 accuracy. Our dataset and code can be found at https://github.com/g-luo/geolocation_via_guidebook_grounding.
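The two ideas in the abstract, attending over guidebook clues and supervising that attention with country-level pseudo labels, can be illustrated with a minimal sketch. This is not the authors' implementation: the embeddings are random stand-ins for learned image/text encoders, and the five-clue example, the `clue_country` pseudo labels, and the dimension `d` are all hypothetical.

```python
import math
import random

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

random.seed(0)
d = 8  # hypothetical embedding dimension

# Stand-in embeddings; in the paper these come from learned encoders.
image_emb = [random.gauss(0, 1) for _ in range(d)]
clue_embs = [[random.gauss(0, 1) for _ in range(d)] for _ in range(5)]
# Pseudo label: the country each guidebook clue is taken to describe.
clue_country = [0, 0, 1, 2, 2]

# Attend over clues: scaled dot-product scores, softmax-normalized.
scores = [dot(image_emb, c) / math.sqrt(d) for c in clue_embs]
attn = softmax(scores)

# Attention supervision: cross-entropy pushing attention mass toward
# clues whose pseudo-label country matches the image's label (here 1).
true_country = 1
mask = [1.0 if c == true_country else 0.0 for c in clue_country]
total = sum(mask)
target = [m / total for m in mask]
attn_loss = -sum(t * math.log(a + 1e-9) for t, a in zip(target, attn))
```

In training, `attn_loss` would be added as an auxiliary term alongside the country-classification loss, so the model learns to ground its prediction in the clues relevant to the true country.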