A real-time human-robot interaction system based on gestures for assistive scenarios
dc.contributor.author | Canal Camprodon, Gerard |
dc.contributor.author | Escalera, Sergio |
dc.contributor.author | Angulo Bahón, Cecilio |
dc.contributor.other | Universitat Politècnica de Catalunya. Departament d'Enginyeria de Sistemes, Automàtica i Informàtica Industrial |
dc.date.accessioned | 2016-11-09T09:53:11Z |
dc.date.available | 2018-08-31T00:30:14Z |
dc.date.issued | 2016 |
dc.identifier.citation | Canal, G., Escalera, S., Angulo, C. A real-time human-robot interaction system based on gestures for assistive scenarios. "Computer vision and image understanding", 2016, vol. 149, p. 65-77. |
dc.identifier.issn | 1077-3142 |
dc.identifier.uri | http://hdl.handle.net/2117/95870 |
dc.description.abstract | Natural and intuitive human interaction with robotic systems is a key point in developing robots that assist people easily and effectively. In this paper, a Human-Robot Interaction (HRI) system able to recognize gestures commonly used in human non-verbal communication is introduced, and an in-depth study of its usability is performed. The system handles dynamic gestures, such as waving or nodding, which are recognized using a Dynamic Time Warping approach based on gesture-specific features computed from depth maps. A static gesture, consisting of pointing at an object, is also recognized. The pointed location is then estimated in order to detect candidate objects the user may be referring to. When the pointed object is unclear to the robot, a disambiguation procedure is performed by means of either a verbal or gestural dialogue. This skill would allow the robot to pick up an object on behalf of a user who may have difficulty doing so unaided. The overall system, which is composed of a NAO robot, a Wifibot robot, a Kinect™ v2 sensor and two laptops, is first evaluated in a structured lab setup. Then, a broad set of user tests was completed, which allows correct performance to be assessed in terms of recognition rates, ease of use and response times. |
dc.format.extent | 13 p. |
dc.language.iso | eng |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/3.0/es/ |
dc.subject | Àrees temàtiques de la UPC::Informàtica |
dc.subject.lcsh | Human-robot interaction |
dc.subject.other | Gesture recognition |
dc.subject.other | Human Robot Interaction |
dc.subject.other | Dynamic Time Warping |
dc.subject.other | Pointing location estimation |
dc.subject.other | Recognition |
dc.subject.other | Model |
dc.title | A real-time human-robot interaction system based on gestures for assistive scenarios |
dc.type | Article |
dc.subject.lemac | Interacció persona-robot |
dc.contributor.group | Universitat Politècnica de Catalunya. ROBiri - Grup de Robòtica de l'IRI |
dc.contributor.group | Universitat Politècnica de Catalunya. GREC - Grup de Recerca en Enginyeria del Coneixement |
dc.identifier.doi | 10.1016/j.cviu.2016.03.004 |
dc.relation.publisherversion | http://www.sciencedirect.com/science/article/pii/S107731421600076X |
dc.rights.access | Open Access |
local.identifier.drac | 18736003 |
dc.description.version | Postprint (author's final draft) |
local.citation.author | Canal, G.; Escalera, S.; Angulo, C. |
local.citation.publicationName | Computer vision and image understanding |
local.citation.volume | 149 |
local.citation.startingPage | 65 |
local.citation.endingPage | 77 |
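The abstract above describes dynamic gestures (waving, nodding) recognized with a Dynamic Time Warping approach over features computed from depth maps. A minimal DTW sketch follows; the feature vectors, Euclidean distance, gesture labels, and rejection threshold are illustrative assumptions, not the authors' implementation:

```python
# Minimal Dynamic Time Warping (DTW) sketch for template-based gesture
# classification. Feature extraction, distance choice, and labels are
# illustrative assumptions, not the paper's implementation.
import math

def euclidean(u, v):
    """Euclidean distance between two same-length feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def dtw_distance(seq_a, seq_b):
    """Align two feature sequences and return the cumulative DTW cost."""
    n, m = len(seq_a), len(seq_b)
    # cost[i][j] = best cumulative cost aligning seq_a[:i] with seq_b[:j]
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = euclidean(seq_a[i - 1], seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

def classify(sequence, templates, threshold):
    """Return the template label with the lowest DTW cost,
    or None when even the best match exceeds the threshold."""
    best_label, best_cost = None, math.inf
    for label, template in templates.items():
        c = dtw_distance(sequence, template)
        if c < best_cost:
            best_label, best_cost = label, c
    return best_label if best_cost <= threshold else None
```

DTW tolerates variations in gesture speed because the alignment may stretch or compress either sequence in time, which is why it suits real-time recognition of gestures performed at different paces by different users.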
Except where otherwise noted, content on this work is licensed under a Creative Commons license: Attribution-NonCommercial-NoDerivs 3.0 Spain.
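The abstract also mentions estimating the location the user points at in order to find candidate objects. One common geometric formulation, sketched here under the assumption of 3D arm joints from a depth sensor and a planar floor at z = 0, is to intersect the elbow-to-hand ray with the ground plane; the joint choice and plane model are illustrative, and the paper's exact geometry may differ:

```python
# Hedged sketch of pointed-location estimation: intersect the ray from
# the elbow through the hand (3D joints, e.g. from a depth sensor) with
# the ground plane z = 0. Joint names and the flat-floor assumption are
# illustrative, not necessarily the authors' method.
def pointed_floor_location(elbow, hand):
    """elbow, hand: (x, y, z) in metres, with z the height above the floor.
    Returns the (x, y) floor point the arm ray hits, or None when the
    ray points upward or runs parallel to the floor."""
    ex, ey, ez = elbow
    hx, hy, hz = hand
    dx, dy, dz = hx - ex, hy - ey, hz - ez   # ray direction elbow -> hand
    if dz >= 0:                              # arm not pointing downward
        return None
    t = -hz / dz                             # hand + t*d reaches z = 0
    return (hx + t * dx, hy + t * dy)
```

Candidate objects can then be ranked by their distance to the estimated floor point, and a disambiguation dialogue triggered when several objects lie close to it.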