A dual network for super-resolution and semantic segmentation of Sentinel-2 imagery
Cite as: hdl:2117/360869
Document type: Article
Publication date: 2021-11-12
Publisher: Multidisciplinary Digital Publishing Institute (MDPI)
Access conditions: Open access
Unless otherwise indicated, the contents of this work are subject to the Creative Commons license: Attribution 4.0 International
Project: APRENDIZAJE PROFUNDO EFICIENTE PARA SECUENCIAS DE VIDEO Y NUBES DE PUNTOS (AEI-PID2020-117142GB-I00)
Abstract
There is a growing interest in the development of automated data processing workflows that provide reliable, high spatial resolution land cover maps. However, high-resolution remote sensing images are not always affordable. Taking into account the free availability of Sentinel-2 satellite data, in this work we propose a deep learning model to generate high-resolution segmentation maps from low-resolution inputs in a multi-task approach. Our proposal is a dual-network model with two branches: the Single Image Super-Resolution branch, which reconstructs a high-resolution version of the input image, and the Semantic Segmentation Super-Resolution branch, which predicts a high-resolution segmentation map with a scaling factor of 2. We performed several experiments to find the best architecture, training and testing on a subset of the S2GLC 2017 dataset. We based our model on the DeepLabV3+ architecture, enhancing it to achieve an improvement of 5% in IoU and almost 10% in recall. Furthermore, our qualitative results demonstrate the effectiveness and usefulness of the proposed approach.
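The dual-branch idea described above can be illustrated with a shape-level sketch. This is a minimal stand-in, not the paper's implementation: the real model uses a DeepLabV3+ backbone, whereas here the "encoder" is an identity placeholder, the upsampling is plain nearest-neighbour, and the names (`dual_forward`, `upsample2x`) and the `num_classes` default are hypothetical.

```python
import numpy as np

def upsample2x(x: np.ndarray) -> np.ndarray:
    """Nearest-neighbour x2 upsampling on the last two (spatial) axes."""
    return x.repeat(2, axis=-2).repeat(2, axis=-1)

def dual_forward(image: np.ndarray, num_classes: int = 10):
    """Shape-level stand-in for the dual network.

    image: (bands, H, W) low-resolution input patch.
    Returns (sr_image, seg_logits), both with spatial size (2H, 2W),
    mirroring the SISR and SSSR branches with scaling factor 2.
    """
    bands, h, w = image.shape
    # A shared encoder would go here (DeepLabV3+ in the paper);
    # we pass the input through unchanged as a placeholder.
    features = image
    # SISR branch: reconstruct a high-resolution version of the input.
    sr_image = upsample2x(features)
    # SSSR branch: per-class logits at the same 2x resolution
    # (zeros here; a real head would be learned).
    seg_logits = np.zeros((num_classes, 2 * h, 2 * w), dtype=features.dtype)
    return sr_image, seg_logits

lr = np.random.rand(4, 64, 64).astype(np.float32)  # 4 bands, 64x64 patch
sr, seg = dual_forward(lr)
print(sr.shape, seg.shape)  # (4, 128, 128) (10, 128, 128)
```

The point of the sketch is only that both heads share one input and emit outputs at twice the input resolution, which is the multi-task coupling the abstract describes.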
Citation: Abadal, S. [et al.]. A dual network for super-resolution and semantic segmentation of Sentinel-2 imagery. "Remote Sensing", 12 November 2021, vol. 13, article 4547, p. 1-25.
ISSN: 2072-4292
Publisher's version: https://www.mdpi.com/2072-4292/13/22/4547
| File | Description | Size | Format |
|---|---|---|---|
| remotesensing-13-04547.pdf | | 18.07 MB | PDF |