Virtual view appearance representation for human motion analysis in multi-view environments
Document type: Conference report
Rights access: Restricted access - publisher's policy
We propose a view-invariant representation of human appearance in multi-view scenarios, consisting of a new set of views that overcome the view-dependency and moderate-occlusion problems of fixed cameras. First, a 3D reconstruction of the scene is generated, from which multiple persons can be tracked. For each tracked subject, we define a set of virtual views by projecting its associated 3D volume. These synthetic views can be generated along convenient directions to detect and classify gestures useful in assistive and smart environments. Experimental results on representation and event detection in a multi-camera environment demonstrate the effectiveness of the proposed method.
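The core idea of projecting a tracked subject's 3D volume along a chosen direction can be illustrated with a short sketch. The following is not the authors' implementation; it assumes the subject's reconstruction is available as a binary voxel occupancy grid and uses a simple orthographic projection to stand in for the virtual camera, producing a silhouette image for a given viewing direction.

import numpy as np

def virtual_view(volume, azimuth_deg, elevation_deg=0.0, out_size=128):
    """Orthographic silhouette of a binary voxel volume seen from a virtual direction.

    volume        -- (X, Y, Z) boolean array, True where the subject occupies space
                     (hypothetical input format, assumed for this sketch)
    azimuth_deg   -- horizontal angle of the virtual camera, in degrees
    elevation_deg -- vertical angle of the virtual camera, in degrees
    out_size      -- side length in pixels of the square output image
    """
    az, el = np.deg2rad(azimuth_deg), np.deg2rad(elevation_deg)
    # Viewing direction and an orthonormal basis (right, up) spanning the image plane.
    d = np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])
    right = np.array([-np.sin(az), np.cos(az), 0.0])
    up = np.cross(d, right)

    # Coordinates of occupied voxels, centred on the subject's volume.
    occ = np.argwhere(volume).astype(float)
    if occ.size == 0:
        return np.zeros((out_size, out_size), dtype=np.uint8)
    occ -= occ.mean(axis=0)

    # Orthographic projection onto the (right, up) image plane.
    u = occ @ right
    v = occ @ up

    # Rasterise the projected points into a fixed-size silhouette image.
    scale = (out_size - 1) / (2.0 * max(np.abs(u).max(), np.abs(v).max(), 1e-6))
    cols = np.round(u * scale + (out_size - 1) / 2.0).astype(int)
    rows = np.round((out_size - 1) / 2.0 - v * scale).astype(int)
    img = np.zeros((out_size, out_size), dtype=np.uint8)
    img[rows, cols] = 255
    return img

# Example: frontal and overhead virtual views of the same tracked subject.
# subject_volume would come from the multi-camera 3D reconstruction step.
# frontal = virtual_view(subject_volume, azimuth_deg=0)
# overhead = virtual_view(subject_volume, azimuth_deg=0, elevation_deg=90)

In the actual system, a perspective virtual camera and a full appearance (not just silhouette) projection would typically be used; this sketch only conveys how view-independent images can be synthesised from the reconstructed volume.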
Citation: López, A.; Canton, C.; Casas, J. "Virtual view appearance representation for human motion analysis in multi-view environments." In: European Signal Processing Conference (EUSIPCO 2010), Aalborg, 2010, pp. 959-963.