Show simple item record

dc.contributor.author: Ramisa Ayats, Arnau
dc.contributor.author: Alenyà Ribas, Guillem
dc.contributor.author: Moreno-Noguer, Francesc
dc.contributor.author: Torras, Carme
dc.contributor.other: Institut de Robòtica i Informàtica Industrial
dc.date.accessioned: 2017-05-15T09:47:07Z
dc.date.available: 2017-05-15T09:47:07Z
dc.date.issued: 2016
dc.identifier.citation: Ramisa, A., Alenyà, G., Moreno-Noguer, F., Torras, C. A 3D descriptor to detect task-oriented grasping points in clothing. "Pattern Recognition", 2016, vol. 60, p. 936-948.
dc.identifier.issn: 0031-3203
dc.identifier.uri: http://hdl.handle.net/2117/104407
dc.description: © 2016. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.description.abstract: Manipulating textile objects with a robot is a challenging task, especially because the garment perception is difficult due to the endless configurations it can adopt, coupled with a large variety of colors and designs. Most current approaches follow a multiple re-grasp strategy, in which clothes are sequentially grasped from different points until one of them yields a recognizable configuration. In this work we propose a method that combines 3D and appearance information to directly select a suitable grasping point for the task at hand, which in our case consists of hanging a shirt or a polo shirt from a hook. Our method follows a coarse-to-fine approach in which, first, the collar of the garment is detected and, next, a grasping point on the lapel is chosen using a novel 3D descriptor. In contrast to current 3D descriptors, ours can run in real time, even when it needs to be densely computed over the input image. Our central idea is to take advantage of the structured nature of range images that most depth sensors provide and, by exploiting integral imaging, achieve speed-ups of two orders of magnitude with respect to competing approaches, while maintaining performance. This makes it especially adequate for robotic applications as we thoroughly demonstrate in the experimental section.
dc.format.extent: 13 p.
dc.language.iso: eng
dc.rights: Attribution-NonCommercial-NoDerivs 3.0 Spain
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/3.0/es/
dc.subject: Àrees temàtiques de la UPC::Informàtica::Robòtica
dc.subject.other: computer vision
dc.subject.other: 3D descriptor
dc.subject.other: Recognition
dc.subject.other: Detection
dc.subject.other: Grasping
dc.subject.other: Manipulation
dc.subject.other: Robotics
dc.title: A 3D descriptor to detect task-oriented grasping points in clothing
dc.type: Article
dc.contributor.group: Universitat Politècnica de Catalunya. ROBiri - Grup de Robòtica de l'IRI
dc.identifier.doi: 10.1016/j.patcog.2016.07.003
dc.description.peerreviewed: Peer Reviewed
dc.subject.inspec: Classificació INSPEC::Pattern recognition::Computer vision
dc.relation.publisherversion: http://www.sciencedirect.com/science/article/pii/S0031320316301558
dc.rights.access: Open Access
local.identifier.drac: 19260098
dc.description.version: Postprint (author's final draft)
dc.relation.projectid: info:eu-repo/grantAgreement/EC/FP7/269959/EU/Intelligent observation and execution of Actions and manipulations/INTELLACT
local.citation.author: Ramisa, A.; Alenyà, G.; Moreno-Noguer, F.; Torras, C.
local.citation.publicationName: Pattern Recognition
local.citation.volume: 60
local.citation.startingPage: 936
local.citation.endingPage: 948
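The abstract attributes the descriptor's real-time speed to integral imaging over structured range images, which turns rectangular-region sums into constant-time lookups. Below is a minimal sketch of that building block; it is a generic summed-area table, not the paper's actual descriptor, and the function names are illustrative:

```python
import numpy as np

def integral_image(depth):
    """Summed-area table: ii[r, c] = sum of depth[:r+1, :c+1].

    Built with two cumulative sums, one per axis, in a single O(H*W) pass.
    """
    return depth.cumsum(axis=0).cumsum(axis=1)

def region_sum(ii, r0, c0, r1, c1):
    """Sum of depth[r0:r1, c0:c1] via four table lookups, O(1) per query."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]      # strip above the region
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]      # strip left of the region
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]      # corner subtracted twice, add back
    return total
```

Because each window sum costs a fixed four lookups regardless of window size, densely evaluating a descriptor over every pixel scales with image size only, which is the kind of speed-up over per-window summation that the abstract describes.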


Files in this item


This item appears in the following collections
