Paper Title
VSGNet: Spatial Attention Network for Detecting Human Object Interactions Using Graph Convolutions
Paper Authors
Paper Abstract
Comprehensive visual understanding requires detection frameworks that can effectively learn and utilize object interactions while analyzing objects individually. This is the main objective of the Human-Object Interaction (HOI) detection task. In particular, relative spatial reasoning and structural connections between objects are essential cues for analyzing interactions, which is addressed by the proposed Visual-Spatial-Graph Network (VSGNet) architecture. VSGNet extracts visual features from human-object pairs, refines the features with the spatial configuration of each pair, and utilizes the structural connections between pairs via graph convolutions. The performance of VSGNet is thoroughly evaluated on the Verbs in COCO (V-COCO) and HICO-DET datasets. Experimental results indicate that VSGNet outperforms state-of-the-art solutions by 8% or 4 mAP on V-COCO and 16% or 3 mAP on HICO-DET.
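The abstract's third step — propagating information between humans and objects via graph convolutions — can be illustrated with a minimal sketch. This is not the paper's implementation: the toy features, adjacency weights, and the plain weighted-sum aggregation are assumptions for illustration only; nodes stand for detected humans/objects and edge weights stand for pairwise interaction strength.

```python
# Illustrative graph-convolution step over a human-object graph.
# NOT VSGNet's actual implementation; features and weights are made up.

def graph_conv(features, adjacency):
    """One propagation step: out[i] = sum_j adjacency[i][j] * features[j].

    Each node (a detected human or object) aggregates its neighbors'
    feature vectors, weighted by the edge strength between them.
    """
    n = len(features)
    dim = len(features[0])
    out = []
    for i in range(n):
        row = [0.0] * dim
        for j in range(n):
            w = adjacency[i][j]
            if w:
                for d in range(dim):
                    row[d] += w * features[j][d]
        out.append(row)
    return out

# Toy graph: node 0 = human, nodes 1 and 2 = objects.
features = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]]
# Adjacency encodes pairwise interaction strength (self-loops included);
# the human connects to both objects, the objects only to the human.
adjacency = [
    [1.0, 0.5, 0.5],
    [0.5, 1.0, 0.0],
    [0.5, 0.0, 1.0],
]
updated = graph_conv(features, adjacency)
```

After one step, the human node's feature (`updated[0]`) mixes in both objects' features in proportion to the interaction weights, which is the structural cue the abstract refers to.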