Metric learning from poses for temporal clustering of human motion
Document type: Conference lecture
Rights access: Restricted access - publisher's policy
Abstract: Temporal clustering of human motion into semantically meaningful behaviors is a challenging task. While unsupervised methods perform reasonably well, the resulting clusters often lack a semantic interpretation. In this paper, we propose to learn what makes a sequence of human poses different from others such that it should be annotated as an action. To this end, we formulate the problem as weakly supervised temporal clustering for an unknown number of clusters. Weak supervision is attained by learning a metric from the implicit semantic distances derived from already annotated databases. Such a metric contains low-level semantic information that can be used to effectively segment a human motion sequence into distinct actions or behaviors. The main advantage of our approach is that learned metrics transfer successfully across datasets, making our method a compelling alternative to unsupervised methods. Experiments on publicly available mocap datasets show the effectiveness of our approach.
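To make the idea concrete, the following is a minimal illustrative sketch, not the paper's actual method: a diagonal Mahalanobis-style metric is learned from annotated poses (dimensions that discriminate between labeled actions get higher weight), and a new pose sequence is then segmented wherever consecutive poses are far apart under that metric. The function names, the variance-ratio weighting, and the thresholded change-point rule are all assumptions chosen for brevity.

```python
import numpy as np

def learn_diag_metric(X, y):
    """Illustrative diagonal metric: weight each pose dimension by the
    ratio of overall variance to mean within-action variance, so that
    dimensions which separate annotated actions count more.
    (Assumed weighting scheme, not the paper's formulation.)"""
    classes = np.unique(y)
    overall_var = X.var(axis=0) + 1e-8
    within_var = np.mean([X[y == c].var(axis=0) for c in classes], axis=0) + 1e-8
    return overall_var / within_var  # per-dimension weights

def metric_dist(a, b, w):
    """Weighted Euclidean (diagonal Mahalanobis) distance."""
    d = a - b
    return np.sqrt(np.sum(w * d * d))

def segment(seq, w, thresh):
    """Cut the sequence wherever consecutive poses are farther apart
    than `thresh` under the learned metric; returns cluster labels."""
    labels = [0]
    for t in range(1, len(seq)):
        labels.append(labels[-1] + int(metric_dist(seq[t - 1], seq[t], w) > thresh))
    return np.array(labels)

# Toy annotated poses: dimension 0 discriminates two actions,
# dimension 1 is noise shared by both.
X = np.array([[0.0, 0.0], [0.1, 3.0], [0.0, -3.0],
              [5.0, 2.0], [5.1, -2.0], [5.0, 0.0]])
y = np.array([0, 0, 0, 1, 1, 1])
w = learn_diag_metric(X, y)  # w[0] >> w[1]

# Unlabeled sequence: five poses of one action, then five of another.
seq = np.array([[0.0, 0.0], [0.0, 2.0], [0.0, -1.0], [0.0, 1.5], [0.0, 0.5],
                [5.0, 1.0], [5.0, -2.0], [5.0, 0.0], [5.0, 2.0], [5.0, -1.0]])
labels = segment(seq, w, thresh=10.0)  # two clusters, cut at the boundary
```

Under a plain (unweighted) Euclidean distance the noisy dimension would blur the boundary; the learned weights suppress it, which is the intuition behind using annotated data to shape the metric.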
Citation: Lopez, A. [et al.]. "Metric learning from poses for temporal clustering of human motion." In: Proceedings of the British Machine Vision Conference. Surrey, 2012, p. 49.1-49.12.