Paper Title
2D-3D Geometric Fusion Network using Multi-Neighbourhood Graph Convolution for RGB-D Indoor Scene Classification
Paper Authors
Paper Abstract
Multi-modal fusion has been shown to enhance the performance of scene classification tasks. This paper presents a 2D-3D Fusion stage that combines 3D Geometric Features with 2D Texture Features obtained by 2D Convolutional Neural Networks. To obtain a robust 3D Geometric embedding, a network that uses two novel layers is proposed. The first layer, Multi-Neighbourhood Graph Convolution, aims to learn a more robust geometric descriptor of the scene by combining two different neighbourhoods: one in the Euclidean space and the other in the Feature space. The second proposed layer, Nearest Voxel Pooling, improves the performance of the well-known Voxel Pooling. Experimental results on the NYU-Depth-V2 and SUN RGB-D datasets show that the proposed method outperforms the current state of the art on the RGB-D indoor scene classification task.
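The core idea of the Multi-Neighbourhood Graph Convolution can be illustrated with a minimal sketch: each point aggregates features over two kNN graphs, one built from 3D coordinates (Euclidean space) and one built from the point features themselves (Feature space), and the two branches are then combined. The function names, the max aggregation, and the learned linear maps `w_euc` / `w_feat` below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def knn_indices(x, k):
    # Indices of the k nearest neighbours of each row of x (self excluded)
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    return np.argsort(d2, axis=1)[:, :k]

def multi_neighbourhood_conv(coords, feats, w_euc, w_feat, k=8):
    """Hypothetical sketch of a multi-neighbourhood graph convolution:
    aggregate point features over a Euclidean-space kNN graph and a
    feature-space kNN graph, then merge the two branches."""
    nn_euc = knn_indices(coords, k)       # neighbours by 3D position
    nn_fea = knn_indices(feats, k)        # neighbours by feature similarity
    agg_euc = feats[nn_euc].max(axis=1)   # max-pool over Euclidean neighbours
    agg_fea = feats[nn_fea].max(axis=1)   # max-pool over feature neighbours
    # Combine the two branches (elementwise max is one simple choice)
    return np.maximum(agg_euc @ w_euc, agg_fea @ w_feat)

rng = np.random.default_rng(0)
coords = rng.normal(size=(32, 3))        # 32 points in 3D
feats = rng.normal(size=(32, 16))        # 16-dim feature per point
w1 = rng.normal(size=(16, 32))
w2 = rng.normal(size=(16, 32))
out = multi_neighbourhood_conv(coords, feats, w1, w2, k=4)
print(out.shape)  # (32, 32)
```

Using two neighbourhoods lets a point attend both to points that are physically close and to points that are semantically similar even when far apart, which is what makes the resulting geometric descriptor more robust.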