Understanding human motions from ego-camera videos
Document type: Master thesis
Rights access: Restricted access - confidentiality agreement
Understanding human movements and classifying them into different categories remains challenging for many applications, from humanoid and assistive robots to medical rehabilitation. The viewpoint of the camera adds a further dimension to the problem. This project aims to understand and distinguish different actions from the viewpoint of an egocentric camera. First, the Blender environment was used to build two human-motion datasets, one small and one large. They contain four and fifteen different actions respectively, consisting of 5K and 120K frames captured from the movements of humans of different ages. Second, the optical flow of each scenario was calculated. Third, these feature vectors were fed to a long short-term memory (LSTM) neural network to classify the actions. The classification accuracy is close to 94% for the small dataset with four actions and near 83% for the large dataset with fifteen actions. This type of experiment has many applications, especially in rehabilitation and biomechanics.
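
Since the thesis itself is under restricted access, the exact optical-flow method is not specified here. As an illustration of the second step, the following is a minimal sketch assuming Farnebäck's dense optical flow as implemented in OpenCV; the function name flow_features and all parameter values are hypothetical choices, not taken from the thesis.

import cv2
import numpy as np

def flow_features(frames):
    """Compute dense optical flow between consecutive frames and
    flatten each flow field into one feature vector per frame pair."""
    features = []
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        curr = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Farnebäck dense optical flow; parameter values are illustrative.
        flow = cv2.calcOpticalFlowFarneback(
            prev, curr, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        features.append(flow.reshape(-1))  # (H * W * 2) displacement vector
        prev = curr
    return np.stack(features)  # shape: (n_frames - 1, H * W * 2)

Each clip then yields a sequence of per-frame feature vectors, which is the form of input an LSTM expects.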
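For the third step, a minimal Keras sketch of an LSTM action classifier is shown below. The layer sizes, sequence length T, feature dimension D, and training settings are assumptions for illustration; only the use of an LSTM and the fifteen-class output come from the abstract.

import tensorflow as tf

# Hypothetical dimensions: T flow vectors per clip, D features per vector,
# and 15 action classes as in the larger dataset.
T, D, NUM_CLASSES = 30, 2048, 15

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(T, D)),      # one optical-flow vector per time step
    tf.keras.layers.LSTM(128),                # summarize the motion sequence
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Training would then be, e.g.:
# model.fit(train_x, train_y, epochs=20, batch_size=32)

The softmax output assigns each clip to one of the action classes, matching the classification setup described in the abstract.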