Show simple item record

dc.contributor.author: Aksoy, Eren Erdal
dc.contributor.author: Abramov, Alexey
dc.contributor.author: Dörr, Johannes
dc.contributor.author: Ning, Kejun
dc.contributor.author: Dellen, Babette
dc.contributor.author: Wörgötter, Florentin
dc.contributor.other: Institut de Robòtica i Informàtica Industrial
dc.date.accessioned: 2011-11-22T16:44:36Z
dc.date.available: 2011-11-22T16:44:36Z
dc.date.issued: 2011-10-28
dc.identifier.citation: Aksoy, E.E. [et al.]. Learning the semantics of object-action relations by observation. "International journal of robotics research", 28 October 2011, vol. 30, no. 10, pp. 1229-1249.
dc.identifier.issn: 0278-3649
dc.identifier.uri: http://hdl.handle.net/2117/14016
dc.description.abstract: Recognizing manipulations performed by a human and the transfer and execution of this by a robot is a difficult problem. We address this in the current study by introducing a novel representation of the relations between objects at decisive time points during a manipulation. Thereby, we encode the essential changes in a visual scenery in a condensed way such that a robot can recognize and learn a manipulation without prior object knowledge. To achieve this we continuously track image segments in the video and construct a dynamic graph sequence. Topological transitions of those graphs occur whenever a spatial relation between some segments has changed in a discontinuous way and these moments are stored in a transition matrix called the semantic event chain (SEC). We demonstrate that these time points are highly descriptive for distinguishing between different manipulations. Employing simple sub-string search algorithms, SECs can be compared and type-similar manipulations can be recognized with high confidence. As the approach is generic, statistical learning can be used to find the archetypal SEC of a given manipulation class. The performance of the algorithm is demonstrated on a set of real videos showing hands manipulating various objects and performing different actions. In experiments with a robotic arm, we show that the SEC can be learned by observing human manipulations, transferred to a new scenario, and then reproduced by the machine.
dc.format.extent: 21 p.
dc.language.iso: eng
dc.rights: Attribution-NonCommercial-NoDerivs 3.0 Spain
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/3.0/es/
dc.subject: Àrees temàtiques de la UPC::Enginyeria de la telecomunicació::Processament del senyal::Reconeixement de formes
dc.subject: Àrees temàtiques de la UPC::Informàtica::Automàtica i control
dc.subject.lcsh: Pattern recognition systems
dc.subject.lcsh: Automation
dc.subject.other: automation, pattern recognition
dc.title: Learning the semantics of object-action relations by observation
dc.type: Article
dc.subject.lemac: Reconeixement de formes (Informàtica)
dc.subject.lemac: Automatització
dc.contributor.group: Universitat Politècnica de Catalunya. ROBiri - Grup de Robòtica de l'IRI
dc.identifier.doi: 10.1177/0278364911410459
dc.description.peerreviewed: Peer Reviewed
dc.subject.inspec: Classificació INSPEC::Pattern recognition
dc.subject.inspec: Classificació INSPEC::Automation
dc.relation.publisherversion: http://dx.doi.org/10.1177/0278364911410459
dc.rights.access: Open Access
local.identifier.drac: 8560555
dc.description.version: Preprint
dc.relation.projectid: info:eu-repo/grantAgreement/EC/FP7/247947/EU/Gardening with a Cognitive System/GARNICS
local.citation.publicationName: International journal of robotics research
local.citation.volume: 30
local.citation.number: 10
local.citation.startingPage: 1229
local.citation.endingPage: 1249
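The abstract's comparison step, matching semantic event chains (SECs) with simple sub-string search, can be illustrated with a minimal sketch. The relation symbols, example chains, and similarity function below are hypothetical stand-ins for illustration only, not the paper's actual encoding or algorithm:

```python
from difflib import SequenceMatcher

# Toy encoding (assumed, not from the paper): each SEC row is a string of
# spatial-relation symbols sampled at the decisive time points of a
# manipulation, e.g. 'N' = not touching, 'T' = touching, 'A' = absent.
sec_push = ["NNTTN", "NTTNN"]
sec_push_variant = ["NNTTTN", "NTTTNN"]  # same manipulation type, extra time point
sec_cut = ["NTANA", "TTAAN"]             # a different manipulation type

def sec_similarity(sec_a, sec_b):
    """Mean best pairwise row similarity between two event chains;
    a stand-in for the paper's sub-string comparison of SECs."""
    scores = []
    for row_a in sec_a:
        # For each row of chain A, find the best-matching row of chain B.
        best = max(SequenceMatcher(None, row_a, row_b).ratio()
                   for row_b in sec_b)
        scores.append(best)
    return sum(scores) / len(scores)

print(sec_similarity(sec_push, sec_push_variant))  # high: type-similar manipulations
print(sec_similarity(sec_push, sec_cut))           # lower: different manipulations
```

Under this toy encoding, chains of type-similar manipulations score higher than chains of different manipulation types, mirroring the recognition idea described in the abstract.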


Files in this item


This item appears in the following collections
