Show simple item record

dc.contributor.author: Suau Cuadros, Xavier
dc.contributor.author: Alcoverro Vidal, Marcel
dc.contributor.author: López Méndez, Adolfo
dc.contributor.author: Ruiz Hidalgo, Javier
dc.contributor.author: Casas Pla, Josep Ramon
dc.contributor.other: Universitat Politècnica de Catalunya. Departament de Teoria del Senyal i Comunicacions
dc.date.accessioned: 2013-03-13T16:13:16Z
dc.date.created: 2012
dc.date.issued: 2012
dc.identifier.citation: Suau, X. [et al.]. INTAIRACT: Joint hand gesture and fingertip classification for touchless interaction. A: European Conference on Computer Vision. "Computer Vision – ECCV 2012. Workshops and Demonstrations". Florència: Springer, 2012, p. 602-606.
dc.identifier.isbn: 978-3-642-33884-7
dc.identifier.uri: http://hdl.handle.net/2117/18278
dc.description.abstract: In this demo we present intAIRact, an online hand-based touchless interaction system. Interactions are based on easy-to-learn hand gestures that, combined with translations and rotations, yield a user-friendly and highly configurable system. The main advantage over existing approaches is that we are able to robustly locate and identify fingertips. Hence, we can employ a simple but powerful alphabet of gestures, determining not only the number of visible fingers in a gesture but also which fingers are being observed. To achieve such a system, we propose a novel method that jointly infers hand gestures and fingertip locations from a single depth image captured by a consumer depth camera. Our approach is based on a novel descriptor for depth data, the Oriented Radial Distribution (ORD) [1]. On the one hand, we exploit the ORD for robust classification of hand gestures by means of efficient k-NN retrieval. On the other hand, maxima of the ORD are used to perform structured inference of fingertip locations. The proposed method outperforms other state-of-the-art approaches in both gesture recognition and fingertip localization. An implementation of the ORD extraction on a GPU yields a real-time demo running at approximately 17 fps on a single laptop.
dc.format.extent: 5 p.
dc.language.iso: eng
dc.publisher: Springer
dc.rights: Attribution-NonCommercial-NoDerivs 3.0 Spain
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/3.0/es/
dc.subject: Àrees temàtiques de la UPC::Informàtica::Sistemes d'informació::Interacció home-màquina
dc.subject: Àrees temàtiques de la UPC::Enginyeria de la telecomunicació
dc.subject.lcsh: Human-computer interaction
dc.title: INTAIRACT: Joint hand gesture and fingertip classification for touchless interaction
dc.type: Conference lecture
dc.subject.lemac: Interacció persona-ordinador
dc.contributor.group: Universitat Politècnica de Catalunya. GPI - Grup de Processament d'Imatge i Vídeo
dc.identifier.doi: 10.1007/978-3-642-33885-4_62
dc.description.peerreviewed: Peer Reviewed
dc.relation.publisherversion: http://link.springer.com/chapter/10.1007/978-3-642-33885-4_62
dc.rights.access: Restricted access - publisher's policy
local.identifier.drac: 11030223
dc.description.version: Postprint (published version)
dc.date.lift: 10000-01-01
local.citation.author: Suau, X.; Alcoverro, M.; Lopez, A.; Ruiz, J.; Casas, J.
local.citation.contributor: European Conference on Computer Vision
local.citation.pubplace: Florència
local.citation.publicationName: Computer Vision – ECCV 2012. Workshops and Demonstrations
local.citation.startingPage: 602
local.citation.endingPage: 606


Files in this item


This item appears in the following collections
