Spatio-temporal alignment and hyperspherical Radon transform for 3D gait recognition in multi-view environments
Cite as:
hdl:2117/9006
Document type: Conference report
Defense date: 2010
Rights access: Open Access
All rights reserved. This work is protected by the corresponding intellectual and industrial property rights. Without prejudice to any existing legal exemptions, reproduction, distribution, public communication or transformation of this work is prohibited without permission of the copyright holder.
Abstract
This paper presents a view-invariant approach to gait recognition in multi-camera scenarios that exploits a joint spatio-temporal data representation and analysis. First, multi-view information is employed to generate a 3D voxel reconstruction of the scene under study. The analyzed subject is tracked, and its centroid and orientation allow re-centering and aligning the associated volume, yielding a representation invariant to translation, rotation and scaling. The temporal periodicity of the walking cycle is extracted to align the input data in the time domain. Finally, the Hyperspherical Radon Transform is presented as an efficient tool for extracting features from spatio-temporal gait templates for classification purposes. Experimental results demonstrate the validity and robustness of the proposed method for gait recognition tasks with several covariates.
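As a rough illustration of the spatio-temporal alignment step described in the abstract, the following sketch normalizes a binary voxel reconstruction of a subject for translation, rotation about the vertical axis, and scale. It is not the authors' implementation: the occupancy grid layout, the PCA-based orientation estimate, and the height-based scale normalization are all assumptions made for this example.

```python
# Hedged sketch (not the paper's code): align a 3D voxel volume of a walking
# subject so it becomes invariant to translation, rotation and scaling.
# Assumes `volume` is a boolean (X, Y, Z) occupancy grid with Z vertical.
import numpy as np
from scipy.ndimage import affine_transform

def align_voxel_volume(volume: np.ndarray) -> np.ndarray:
    """Recenter, rotate and rescale a binary voxel volume of a subject."""
    occ = np.argwhere(volume)                 # (N, 3) occupied voxel coordinates
    centroid = occ.mean(axis=0)

    # Orientation: principal axis of the ground-plane (XY) projection
    # of the occupied voxels (a common choice, assumed here).
    xy = occ[:, :2] - centroid[:2]
    _, vecs = np.linalg.eigh(xy.T @ xy)
    main_axis = vecs[:, -1]                   # dominant horizontal direction
    theta = np.arctan2(main_axis[1], main_axis[0])

    # Rotation about the vertical axis aligning the walking direction with X.
    c, s = np.cos(-theta), np.sin(-theta)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])

    # Scale so the subject's height spans a fixed fraction of the grid.
    height = occ[:, 2].max() - occ[:, 2].min() + 1
    scale = 0.8 * volume.shape[2] / height

    # affine_transform maps output coords to input coords: in = M @ out + offset.
    out_center = np.array(volume.shape) / 2.0
    matrix = rot.T / scale
    offset = centroid - matrix @ out_center
    aligned = affine_transform(volume.astype(np.float32), matrix, offset=offset,
                               output_shape=volume.shape, order=1)
    return aligned > 0.5
```

Temporal alignment (detecting the walking-cycle period) and the Hyperspherical Radon Transform feature extraction described in the abstract are separate steps not covered by this sketch.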
Citation: Canton-Ferrer, C.; Casas, J.; Pardas, M. Spatio-temporal alignment and hyperspherical Radon transform for 3D gait recognition in multi-view environments. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. "2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops". San Francisco: 2010, p. 116-121.
ISBN: 978-1-4244-7030-3
Publisher version: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5544615
Files:

| File | Size |
|---|---|
| spatio-temporal.pdf | 1,770Mb |