One shot learning for generic instance segmentation in RGBD videos
Cite as: hdl:2117/165543
Document type: Conference proceedings paper
Publication date: 2019
Publisher: Scitepress
Access conditions: Open access
Unless otherwise indicated, the contents of this work are subject to the Creative Commons license:
Attribution-NonCommercial-NoDerivs 3.0 Spain
Abstract
Hand-crafted features employed in classical generic instance segmentation methods have limited discriminative power to distinguish different objects in a scene, while Convolutional Neural Network (CNN) based semantic segmentation is restricted to predefined semantics and is not aware of object instances. In this paper, we combine the advantages of the two methodologies and apply the combined approach to generic instance segmentation in RGBD video sequences. In practice, a classical generic instance segmentation method initially detects object instances and builds temporal correspondences, while instance models are trained via CNNs on the few detected instance samples to generate robust features for instance segmentation. We exploit the idea of one-shot learning to deal with the small training sample size when training the CNNs. Experimental results illustrate the promising performance of the proposed approach.
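The instance-modelling idea in the abstract — build a model per detected instance from only a few samples, then use it to label new observations — can be illustrated with a minimal nearest-prototype sketch. This is not the authors' implementation: the feature dimensionality, sample counts, and the prototype rule are assumptions, and random vectors stand in for CNN-generated per-pixel features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each pixel is described by a D-dim feature vector
# (a stand-in for the CNN features the paper trains per instance).
D = 8
# A few labelled samples per detected instance (one-shot regime).
instances = {
    1: rng.normal(0.0, 0.1, size=(5, D)) + 1.0,
    2: rng.normal(0.0, 0.1, size=(5, D)) - 1.0,
}

# One prototype per instance: the mean of its few samples.
prototypes = {k: v.mean(axis=0) for k, v in instances.items()}

def segment(features):
    """Assign each feature vector to the nearest instance prototype."""
    ids = list(prototypes)
    dists = np.stack(
        [np.linalg.norm(features - prototypes[i], axis=1) for i in ids]
    )
    return np.array(ids)[dists.argmin(axis=0)]

# Query features drawn near each instance's cluster are recovered correctly.
query = np.vstack([
    rng.normal(1.0, 0.1, size=(3, D)),
    rng.normal(-1.0, 0.1, size=(3, D)),
])
print(segment(query))  # first three map to instance 1, last three to 2
```

In the paper a trained CNN replaces this fixed distance rule, but the structure is the same: a handful of samples per instance defines the model, and every new frame's features are classified against those instance models.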
Citation: Lin, X.; Casas, J.; Pardas, M. One shot learning for generic instance segmentation in RGBD videos. In: International Conference on Computer Vision Theory and Applications. "Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications". Setúbal: Scitepress, 2019, p. 233-239.
ISBN: 978-989-758-354-4
Files: cLin19.pdf (1,635Mb, PDF)