2D–3D geometric fusion network using multi-neighbourhood graph convolution for RGB-D indoor scene classification
Cite as: hdl:2117/346435
Document type: Article
Defense date: 2021-12
Publisher: Elsevier
Rights access: Open Access
All rights reserved. This work is protected by the corresponding intellectual and industrial property rights. Without prejudice to any existing legal exemptions, reproduction, distribution, public communication, or transformation of this work is prohibited without permission of the copyright holder.
Abstract
Multi-modal fusion has been shown to enhance the performance of scene classification tasks. This paper presents a 2D-3D Fusion stage that combines 3D Geometric Features with 2D Texture Features obtained by 2D Convolutional Neural Networks. To obtain a robust 3D Geometric embedding, a network that uses two novel layers is proposed. The first layer, Multi-Neighbourhood Graph Convolution, aims to learn a more robust geometric descriptor of the scene by combining two different neighbourhoods: one in the Euclidean space and the other in the Feature space. The second proposed layer, Nearest Voxel Pooling, improves the performance of the well-known Voxel Pooling. Experimental results on the NYU-Depth-V2 and SUN RGB-D datasets show that the proposed method outperforms the current state-of-the-art on the RGB-D indoor scene classification task.
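The core idea of the Multi-Neighbourhood Graph Convolution can be illustrated with a minimal NumPy sketch: each point aggregates features over two k-NN graphs, one built from 3D coordinates (Euclidean space) and one built from the current feature vectors (Feature space). This is an illustrative assumption-laden sketch, not the authors' implementation; the function names, max-pooling aggregation, and the random linear projection are all placeholders for illustration.

```python
import numpy as np

def knn_indices(x, k):
    # Pairwise squared distances between rows of x, then the
    # indices of each row's k nearest neighbours (excluding itself).
    d = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)  # a point is not its own neighbour
    return np.argsort(d, axis=1)[:, :k]

def multi_neighbourhood_conv(points, feats, k=4, seed=0):
    """Sketch of a multi-neighbourhood graph convolution step:
    aggregate each point's features over its k-NN in Euclidean
    (xyz) space AND its k-NN in feature space, then apply a shared
    linear projection (random weights here, learned in practice)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((2 * feats.shape[1], feats.shape[1]))
    eu = knn_indices(points, k)   # spatial neighbourhood
    fe = knn_indices(feats, k)    # feature-space neighbourhood
    # Max-aggregate over each neighbourhood, concatenate, project.
    agg = np.concatenate([feats[eu].max(1), feats[fe].max(1)], axis=1)
    return np.maximum(agg @ w, 0)  # ReLU nonlinearity
```

Combining the two neighbourhoods lets spatially distant but semantically similar points (via the feature-space graph) contribute to a point's descriptor, which is the motivation the abstract gives for the layer.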
Citation: Mosella, A.; Ruiz-Hidalgo, J. 2D–3D geometric fusion network using multi-neighbourhood graph convolution for RGB-D indoor scene classification. "Information Fusion", December 2021, vol. 76, p. 46-54.
ISSN: 1566-2535
Publisher version: https://www.sciencedirect.com/science/article/pii/S1566253521001032
File | Description | Size | Format
---|---|---|---
2021IF_MUNEGC_ACCEPTED_.pdf | Main Article | 771.8 kB | PDF