We propose robust multi-dimensional motion features for human activity recognition from first-person videos. The proposed features encode information about motion magnitude, direction and variation, and combine it with virtual inertial data generated from the video itself. The use of a grid flow representation, per-frame normalization and temporal feature accumulation enhances the robustness of our new representation. Results on multiple datasets demonstrate that the proposed feature representation outperforms existing motion features and, importantly, does so independently of the classifier. Moreover, the proposed multi-dimensional motion features are general enough to be applicable to vision tasks beyond those related to wearable cameras. (C) 2015 The Authors. Published by Elsevier Inc.
Citation: Abebe, G., Cavallaro, A., Llanas, F. Robust multi-dimensional motion features for first-person vision activity recognition. "Computer vision and image understanding", August 2016, vol. 149, p. 229-248.
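The pipeline outlined in the abstract — pooling dense optical flow over a spatial grid, normalizing each frame's feature vector, and accumulating features over a temporal window — can be sketched as follows. This is a minimal illustration, not the paper's exact feature design: the 4x4 grid, the per-cell statistics (mean/std of magnitude, mean cos/sin of direction), and the mean+std temporal accumulation are assumptions chosen for clarity, and the dense flow field is taken as a precomputed input rather than estimated from video.

```python
import numpy as np

def grid_flow_features(flow, grid=(4, 4)):
    """Pool a dense optical flow field (H, W, 2) over a spatial grid.

    Per cell we keep magnitude mean/std (motion magnitude and variation)
    and the mean cosine/sine of the flow angle (motion direction).
    Grid size and statistics are illustrative assumptions.
    """
    H, W, _ = flow.shape
    gh, gw = grid
    mag = np.hypot(flow[..., 0], flow[..., 1])
    ang = np.arctan2(flow[..., 1], flow[..., 0])
    feats = []
    for i in range(gh):
        for j in range(gw):
            rows = slice(i * H // gh, (i + 1) * H // gh)
            cols = slice(j * W // gw, (j + 1) * W // gw)
            m, a = mag[rows, cols], ang[rows, cols]
            feats.extend([m.mean(), m.std(),
                          np.cos(a).mean(), np.sin(a).mean()])
    f = np.asarray(feats)
    # Per-frame normalization: unit L2 norm makes frames comparable.
    return f / (np.linalg.norm(f) + 1e-8)

def accumulate(frame_features):
    """Temporal accumulation over a window of per-frame feature vectors.

    Stacking the windowed mean and std is one simple accumulation choice.
    """
    F = np.stack(frame_features)
    return np.concatenate([F.mean(axis=0), F.std(axis=0)])
```

With a 4x4 grid and 4 statistics per cell, each frame yields a 64-dimensional unit-norm vector, and a window of frames yields a 128-dimensional descriptor; in practice the flow field would come from an optical flow estimator run on consecutive video frames.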