
dc.contributor.author: Villamizar Vergel, Michael Alejandro
dc.contributor.author: Garrell Zulueta, Anais
dc.contributor.author: Sanfeliu Cortés, Alberto
dc.contributor.author: Moreno-Noguer, Francesc
dc.contributor.other: Institut de Robòtica i Informàtica Industrial
dc.contributor.other: Universitat Politècnica de Catalunya. Departament d'Enginyeria de Sistemes, Automàtica i Informàtica Industrial
dc.date.accessioned: 2016-05-17T14:45:15Z
dc.date.available: 2016-05-17T14:45:15Z
dc.date.issued: 2015
dc.identifier.citation: Villamizar, M.A., Garrell, A., Sanfeliu, A., Moreno-Noguer, F. Modeling robot's world with minimal effort. In: IEEE International Conference on Robotics and Automation. "2015 IEEE International Conference on Robotics and Automation (ICRA 2015): Seattle, Washington, USA, 26-30 May 2015". Seattle, WA: Institute of Electrical and Electronics Engineers (IEEE), 2015, p. 4890-4896.
dc.identifier.isbn: 978-1-4799-6924-1
dc.identifier.uri: http://hdl.handle.net/2117/87122
dc.description: © 20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.description.abstract: We propose an efficient Human Robot Interaction approach to model the appearance of all relevant objects in the robot's environment. Given an input video stream recorded while the robot is navigating, the user needs to annotate only a very small number of frames to build specific classifiers for each object of interest. At the core of the method are several random ferns classifiers that share the same features and are updated online. The resulting methodology is fast (runs at 8 fps), versatile (it can be applied to unconstrained scenarios), scalable (real experiments show we can model up to 30 different object classes), and minimizes the amount of human intervention by leveraging the uncertainty measures associated with each classifier. We thoroughly validate the approach on synthetic data and on real sequences acquired with a mobile platform in challenging outdoor scenarios containing a multitude of different objects. We show that the human can, with minimal effort, provide the robot with a detailed model of the objects in the scene.
dc.format.extent: 7 p.
dc.language.iso: eng
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/3.0/es/
dc.subject: Àrees temàtiques de la UPC::Informàtica::Robòtica
dc.subject.other: Human robot interaction
dc.title: Modeling robot's world with minimal effort
dc.type: Conference report
dc.contributor.group: Universitat Politècnica de Catalunya. VIS - Visió Artificial i Sistemes Intel.ligents
dc.contributor.group: Universitat Politècnica de Catalunya. ROBiri - Grup de Robòtica de l'IRI
dc.identifier.doi: 10.1109/ICRA.2015.7139878
dc.description.peerreviewed: Peer Reviewed
dc.subject.inspec: Classificació INSPEC::Automation::Robots::Humanoid robots
dc.relation.publisherversion: http://ieeexplore.ieee.org/document/7139878/
dc.rights.access: Open Access
drac.iddocument: 16825595
dc.description.version: Postprint (author's final draft)
dc.relation.projectid: info:eu-repo/grantAgreement/EC/FP7/287617/EU/Aerial Robotics Cooperative Assembly System/ARCAS
upcommons.citation.author: Villamizar, M.A., Garrell, A., Sanfeliu, A., Moreno-Noguer, F.
upcommons.citation.contributor: IEEE International Conference on Robotics and Automation
upcommons.citation.pubplace: Seattle, WA
upcommons.citation.published: true
upcommons.citation.publicationName: 2015 IEEE International Conference on Robotics and Automation (ICRA 2015): Seattle, Washington, USA, 26-30 May 2015
upcommons.citation.startingPage: 4890
upcommons.citation.endingPage: 4896



Except where otherwise noted, this work is licensed under a Creative Commons license: Attribution-NonCommercial-NoDerivs 3.0 Spain.