Paper Title
Zero-Shot Multi-View Indoor Localization via Graph Location Networks
Paper Authors
Paper Abstract
Indoor localization is a fundamental problem in location-based applications. Current approaches to this problem typically rely on Radio Frequency technology, which requires not only supporting infrastructures but human efforts to measure and calibrate the signal. Moreover, data collection for all locations is indispensable in existing methods, which in turn hinders their large-scale deployment. In this paper, we propose a novel neural network based architecture Graph Location Networks (GLN) to perform infrastructure-free, multi-view image based indoor localization. GLN makes location predictions based on robust location representations extracted from images through message-passing networks. Furthermore, we introduce a novel zero-shot indoor localization setting and tackle it by extending the proposed GLN to a dedicated zero-shot version, which exploits a novel mechanism Map2Vec to train location-aware embeddings and make predictions on novel unseen locations. Our extensive experiments show that the proposed approach outperforms state-of-the-art methods in the standard setting, and achieves promising accuracy even in the zero-shot setting where data for half of the locations are not available. The source code and datasets are publicly available at https://github.com/coldmanck/zero-shot-indoor-localization-release.
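
To make the two ideas in the abstract concrete, the following is a minimal sketch, not the authors' released implementation, assuming a PyTorch setting: (1) a message-passing layer that refines per-view image features into a pooled location representation, and (2) a zero-shot prediction step that matches that representation against precomputed location-aware embeddings (a stand-in for the paper's Map2Vec). All class names, dimensions, and the view-graph construction below are illustrative assumptions; the actual architecture is available in the linked repository.

import torch
import torch.nn as nn
import torch.nn.functional as F


class MessagePassingLayer(nn.Module):
    """One round of message passing among the multi-view image features (assumed design)."""

    def __init__(self, dim: int):
        super().__init__()
        self.msg = nn.Linear(dim, dim)          # transforms incoming messages
        self.update = nn.Linear(2 * dim, dim)   # fuses node state with aggregated messages

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (num_views, dim) per-view image features of one location
        # adj: (num_views, num_views) row-normalized adjacency among the views
        messages = adj @ self.msg(x)                                   # aggregate neighbor messages
        return F.relu(self.update(torch.cat([x, messages], dim=-1)))  # update node states


class GraphLocationNetSketch(nn.Module):
    """Toy GLN-style model: message passing + pooling -> location logits (seen locations)."""

    def __init__(self, dim: int, num_locations: int, num_layers: int = 2):
        super().__init__()
        self.layers = nn.ModuleList([MessagePassingLayer(dim) for _ in range(num_layers)])
        self.classifier = nn.Linear(dim, num_locations)  # standard (seen-location) setting

    def encode(self, views: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        h = views
        for layer in self.layers:
            h = layer(h, adj)
        return h.mean(dim=0)                             # pooled location representation

    def forward(self, views: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.encode(views, adj))


def zero_shot_predict(rep: torch.Tensor, loc_embeddings: torch.Tensor) -> int:
    # Zero-shot variant: instead of a classifier over seen locations, compare the pooled
    # representation with location-aware embeddings (one row per location, including unseen
    # ones) and return the index of the nearest location.
    scores = F.cosine_similarity(rep.unsqueeze(0), loc_embeddings, dim=-1)
    return int(scores.argmax())


# Example with random inputs: 4 views of one location, 64-d features, 10 seen locations.
views = torch.randn(4, 64)
adj = torch.full((4, 4), 0.25)           # fully connected, row-normalized view graph (assumption)
model = GraphLocationNetSketch(dim=64, num_locations=10)
logits = model(views, adj)               # standard-setting prediction over seen locations
loc_emb = torch.randn(20, 64)            # stand-in embeddings for 20 locations (incl. unseen)
pred = zero_shot_predict(model.encode(views, adj), loc_emb)

In this sketch the zero-shot step simply replaces the fixed classifier with nearest-neighbor matching in the embedding space, which is what allows predictions at locations that have no training images.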