Rights access: Restricted access - publisher's policy
We propose a view-invariant representation of human appearance in multi-view scenarios, consisting of a new set of views that overcomes the view-dependency and moderate-occlusion problems of fixed cameras. First, a 3D reconstruction of the scene is generated, from which multiple persons in the scenario can be tracked. For each tracked subject, we define a set of virtual views by projecting its associated 3D volume. These synthetic views can be generated along convenient directions to detect and classify a number of gestures useful in assistive and smart environments. Experimental results on the representation and on event detection in a multi-camera environment demonstrate the effectiveness of the proposed method.
Citation: López, A.; Canton, C.; Casas, J. "Virtual view appearance representation for human motion analysis in multi-view environments". In: European Signal Processing Conference (EUSIPCO 2010). Aalborg, 2010, pp. 959-963.