Show simple item record

dc.contributor.author: Rubio Romano, Antonio
dc.contributor.author: Villamizar Vergel, Michael Alejandro
dc.contributor.author: Ferraz Colomina, Luis
dc.contributor.author: Peñate Sánchez, Adrián
dc.contributor.author: Ramisa Ayats, Arnau
dc.contributor.author: Simó Serra, Edgar
dc.contributor.author: Sanfeliu Cortés, Alberto
dc.contributor.author: Moreno-Noguer, Francesc
dc.contributor.other: Institut de Robòtica i Informàtica Industrial
dc.contributor.other: Universitat Politècnica de Catalunya. Departament d'Enginyeria de Sistemes, Automàtica i Informàtica Industrial
dc.date.accessioned: 2016-02-29T15:16:35Z
dc.date.available: 2016-02-29T15:16:35Z
dc.date.issued: 2015
dc.identifier.citation: Rubio, A., Villamizar, M.A., Ferraz, L., Peñate, A., Ramisa, A., Simo, E., Sanfeliu, A., Moreno-Noguer, F. Efficient monocular pose estimation for complex 3D models. A: IEEE International Conference on Robotics and Automation. "2015 IEEE International Conference on Robotics and Automation (ICRA 2015): Seattle, Washington, USA, 26-30 May 2015". Seattle, WA: Institute of Electrical and Electronics Engineers (IEEE), 2015, p. 1397-1402.
dc.identifier.isbn: 978-1-4799-6924-1
dc.identifier.uri: http://hdl.handle.net/2117/83560
dc.description.abstract: We propose a robust and efficient method to estimate the pose of a camera with respect to complex 3D textured models of the environment that can potentially contain more than 100,000 points. To tackle this problem we follow a top-down approach where we combine high-level deep network classifiers with low-level geometric approaches to come up with a solution that is fast, robust and accurate. Given an input image, we initially use a pre-trained deep network to compute a rough estimation of the camera pose. This initial estimate constrains the number of 3D model points that can be seen from the camera viewpoint. We then establish 3D-to-2D correspondences between these potentially visible points of the model and the 2D detected image features. Accurate pose estimation is finally obtained from these correspondences using a novel PnP algorithm that rejects outliers without the need for a RANSAC strategy, and which is between 10 and 100 times faster than other methods that use it. Two real experiments dealing with very large and complex 3D models demonstrate the effectiveness of the approach.
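
The abstract describes a three-stage pipeline: a deep network gives a coarse pose, the coarse viewpoint prunes the large model to its potentially visible points, and a PnP solve on the resulting correspondences recovers the accurate pose. The sketch below illustrates that flow on synthetic data; it is not the authors' implementation. All names (coarse_pose, visible_subset, pnp_dlt) are hypothetical, the network stage is a stub, visibility is approximated with a surface-normal test, and the final step is a plain DLT-based PnP rather than the paper's RANSAC-free outlier-rejecting algorithm.

```python
# Minimal sketch of the pipeline in the abstract, on synthetic data.
import numpy as np

def coarse_pose(image):
    """Stub for the pre-trained deep network: returns a rough camera centre
    and viewing direction (a real system would classify discretised views)."""
    return np.array([0.0, 0.0, -5.0]), np.array([0.0, 0.0, 1.0])

def visible_subset(normals, view_dir, cos_thresh=0.2):
    """Keep model points whose surface normal faces the camera; this is the
    step that prunes a >100,000-point model down to the candidates that
    could be seen from the rough viewpoint."""
    return np.where(normals @ (-view_dir) > cos_thresh)[0]

def pnp_dlt(pts3d, pts2d, K):
    """DLT estimate of [R|t] (up to sign/scale) from n >= 6 correspondences.
    Stand-in for the paper's outlier-rejecting PnP, which avoids RANSAC."""
    A = np.zeros((2 * len(pts3d), 12))
    for i, (Xi, (u, v)) in enumerate(zip(pts3d, pts2d)):
        Xh = np.append(Xi, 1.0)
        A[2 * i, 0:4], A[2 * i, 8:12] = Xh, -u * Xh
        A[2 * i + 1, 4:8], A[2 * i + 1, 8:12] = Xh, -v * Xh
    P = np.linalg.svd(A)[2][-1].reshape(3, 4)   # null vector of A
    Rt = np.linalg.inv(K) @ P                   # remove the calibration
    return Rt / np.linalg.norm(Rt[:, 0])        # normalise rotation scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
    X = rng.uniform(-1.0, 1.0, (100, 3))                    # model points
    normals = X / np.linalg.norm(X, axis=1, keepdims=True)  # toy normals
    _, view = coarse_pose(image=None)            # stage 1: rough viewpoint
    idx = visible_subset(normals, view)          # stage 2: prune the model
    Pw = X[idx]
    proj = (K @ (Pw + [0.0, 0.0, 5.0]).T).T      # ground truth: R=I, t=(0,0,5)
    x = proj[:, :2] / proj[:, 2:3]               # stage 3: 2D features
    print(np.round(pnp_dlt(Pw, x, K), 3))        # ~[I | (0,0,5)] up to sign
```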
dc.format.extent: 6 p.
dc.language.iso: eng
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/3.0/es/
dc.subject: Àrees temàtiques de la UPC::Informàtica::Robòtica
dc.subject.other: computer vision
dc.subject.other: pose estimation
dc.subject.other: camera pose estimation
dc.subject.other: deep learning
dc.subject.other: complex 3D models
dc.title: Efficient monocular pose estimation for complex 3D models
dc.type: Conference report
dc.contributor.group: Universitat Politècnica de Catalunya. VIS - Visió Artificial i Sistemes Intel·ligents
dc.contributor.group: Universitat Politècnica de Catalunya. ROBiri - Grup de Robòtica de l'IRI
dc.identifier.doi: 10.1109/ICRA.2015.7139372
dc.description.peerreviewed: Peer Reviewed
dc.subject.inspec: Classificació INSPEC::Pattern recognition::Computer vision
dc.relation.publisherversion: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7139372
dc.rights.access: Open Access
local.identifier.drac: 16825572
dc.description.version: Postprint (author's final draft)
dc.relation.projectid: info:eu-repo/grantAgreement/EC/FP7/287617/EU/Aerial Robotics Cooperative Assembly System/ARCAS
dc.relation.projectid: info:eu-repo/grantAgreement/MINECO//DPI2013-42458-P/ES/INTERACCION, APRENDIZAJE Y COOPERACION ROBOT-HUMANO EN AREAS URBANAS/
local.citation.author: Rubio, A.; Villamizar, M.A.; Ferraz, L.; Peñate, A.; Ramisa, A.; Simo, E.; Sanfeliu, A.; Moreno-Noguer, F.
local.citation.contributor: IEEE International Conference on Robotics and Automation
local.citation.pubplace: Seattle, WA
local.citation.publicationName: 2015 IEEE International Conference on Robotics and Automation (ICRA 2015): Seattle, Washington, USA, 26-30 May 2015
local.citation.startingPage: 1397
local.citation.endingPage: 1402

