Visual summary of egocentric photostreams by representative keyframes
Document type: Conference lecture
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Rights access: Open Access
Building a visual summary from an egocentric photostream captured by a lifelogging wearable camera is of high interest for different applications (e.g. memory reinforcement). In this paper, we propose a new summarization method based on keyframe selection that uses visual features extracted by means of a convolutional neural network. Our method applies unsupervised clustering to divide the photostream into events and then extracts the most relevant keyframe for each event. We assess the results with a blind taste test in which a group of 20 people rated the quality of the summaries.
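The keyframe-selection step described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes the photostream has already been divided into events (here given as per-frame event labels) and that each frame is represented by a CNN feature vector, then picks as keyframe the frame closest to its event's mean feature.

```python
import numpy as np

def select_keyframes(features, event_labels):
    """For each event, return the index of the frame whose feature
    vector is nearest to the event's mean feature vector, a simple
    proxy for the 'most representative' keyframe.

    features: (n_frames, dim) array of per-frame CNN features
    event_labels: length-n_frames sequence of event ids
    """
    features = np.asarray(features, dtype=float)
    event_labels = np.asarray(event_labels)
    keyframes = {}
    for event in np.unique(event_labels):
        idx = np.where(event_labels == event)[0]
        centroid = features[idx].mean(axis=0)
        dists = np.linalg.norm(features[idx] - centroid, axis=1)
        keyframes[int(event)] = int(idx[np.argmin(dists)])
    return keyframes
```

In the paper the event boundaries themselves come from unsupervised clustering of the CNN features; the sketch above only covers the final per-event keyframe choice.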
Citation: Bolaños, M., Mestre, R., Talavera, E., Giro, X., Radeva, P. Visual summary of egocentric photostreams by representative keyframes. A: International Workshop on Wearable and Ego-vision Systems for Augmented Experience. "Multimedia & Expo Workshops (ICMEW), 2015 IEEE International Conference on". Torino: Institute of Electrical and Electronics Engineers (IEEE), 2015.