Show simple item record

dc.contributor.author: Moreno-Noguer, Francesc
dc.contributor.author: Sanfeliu Cortés, Alberto
dc.contributor.author: Samaras, Dimitris
dc.contributor.other: Universitat Politècnica de Catalunya. Departament d'Enginyeria de Sistemes, Automàtica i Informàtica Industrial
dc.contributor.other: Institut de Robòtica i Informàtica Industrial
dc.date.accessioned: 2009-03-13T09:25:35Z
dc.date.available: 2009-03-13T09:25:35Z
dc.date.created: 2008
dc.date.issued: 2008
dc.identifier.citation: Moreno-Noguer, Francesc; Sanfeliu, Alberto; Samaras, Dimitris. "Dependent multiple cue integration for robust tracking". IEEE transactions on pattern analysis and machine intelligence, 2008, vol. 30, núm. 4, p. 670-685.
dc.identifier.issn: 0162-8828
dc.identifier.uri: http://hdl.handle.net/2117/2705
dc.description.abstract: We propose a new technique for fusing multiple cues to robustly segment an object from its background in video sequences that suffer from abrupt changes of both illumination and position of the target. Robustness is achieved by the integration of appearance and geometric object features and by their estimation using Bayesian filters, such as Kalman or particle filters. In particular, each filter estimates the state of a specific object feature, conditionally dependent on another feature estimated by a distinct filter. This dependence provides improved target representations, permitting us to segment it out from the background even in nonstationary sequences. Considering that the procedure of the Bayesian filters may be described by a "hypotheses generation-hypotheses correction" strategy, the major novelty of our methodology compared to previous approaches is that the mutual dependence between filters is considered during the feature observation, that is, in the "hypotheses-correction" stage, instead of considering it when generating the hypotheses. This proves to be much more effective in terms of accuracy and reliability. The proposed method is analytically justified and applied to develop a robust tracking system that adapts online and simultaneously the color space where the image points are represented, the color distributions, the contour of the object, and its bounding box. Results with synthetic data and real video sequences demonstrate the robustness and versatility of our method.
dc.format.extent: p. 670-685
dc.language.iso: eng
dc.publisher: IEEE
dc.relation.ispartof: IEEE transactions on pattern analysis and machine intelligence
dc.subject: Àrees temàtiques de la UPC::Enginyeria de la telecomunicació::Processament del senyal::Processament de la imatge i del senyal vídeo
dc.subject.lcsh: Computer vision
dc.subject.other: Bayesian tracking
dc.subject.other: multiple cue integration
dc.title: Dependent multiple cue integration for robust tracking
dc.type: Article
dc.subject.lemac: Visió per ordinador
dc.contributor.group: Universitat Politècnica de Catalunya. VIS - Visió Artificial i Sistemes Intel·ligents
dc.description.peerreviewed: Peer Reviewed
dc.subject.inspec: Classificació INSPEC::Pattern recognition::Computer vision
dc.relation.publisherversion: http://dx.doi.org/10.1109/TPAMI.2007.70727
dc.rights.access: Open Access
dc.relation.projectid: cttJ-0929
dc.relation.projectid: cttE-00938
dc.relation.projectid: cttV-00069
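The abstract describes a "hypotheses generation-hypotheses correction" scheme in which several Bayesian filters each track one object feature, and the dependence between filters enters during the correction (observation) stage rather than during hypothesis generation. The following is a minimal, illustrative 1-D particle-filter sketch of that idea, not the paper's actual method: the two features, noise levels, and observation likelihoods are invented for illustration. Filter B's observation likelihood is conditioned on filter A's current estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, noise=0.1):
    # Hypotheses generation: propagate particles with a random-walk motion model.
    return particles + rng.normal(0.0, noise, size=particles.shape)

def correct(particles, likelihood):
    # Hypotheses correction: weight particles by an observation likelihood,
    # then resample in proportion to the weights.
    w = likelihood(particles)
    w = w / w.sum()
    return particles[rng.choice(len(particles), size=len(particles), p=w)]

# Hypothetical toy problem: feature A (say, a color statistic) and feature B
# (say, a contour position) are each tracked by a separate particle filter.
true_a, true_b = 1.0, 2.0
pa = rng.normal(0.0, 1.0, 200)  # particles for feature A
pb = rng.normal(0.0, 1.0, 200)  # particles for feature B

for _ in range(30):
    pa = predict(pa)
    pa = correct(pa, lambda p: np.exp(-(p - true_a) ** 2 / 0.1))
    a_hat = pa.mean()  # current estimate of A, passed to B's filter

    pb = predict(pb)
    # The dependence enters here, in the correction stage: B's observation
    # likelihood is conditioned on A's estimate (B is observed relative to A).
    pb = correct(pb, lambda p: np.exp(-((p - a_hat) - (true_b - true_a)) ** 2 / 0.2))

print(f"A ~ {pa.mean():.2f}, B ~ {pb.mean():.2f}")
```

Coupling the filters through the observation likelihood, rather than by sampling B's hypotheses from A's state, is the distinction the abstract draws between correction-stage and generation-stage integration.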



All rights reserved. This work is protected by the corresponding intellectual and industrial property rights. Without prejudice to any existing legal exemptions, reproduction, distribution, public communication, or transformation of this work are prohibited without permission of the copyright holder.