ViTS: Video tagging system from massive web multimedia collections
Document type: Conference report
Rights access: Open Access
The popularization of multimedia content on the Web has given rise to the need to automatically understand, index and retrieve it. In this paper we present ViTS, an automatic Video Tagging System which learns from videos, their web context and comments shared on social networks. ViTS analyses massive multimedia collections by crawling the Internet, and maintains a knowledge base that is updated in real time without human supervision. As a result, each video is indexed with a rich set of labels and linked with other related content. ViTS is an industrial product in commercial use, with a vocabulary of over 2.5M concepts and capable of indexing more than 150k videos per month. We compare the quality and completeness of our tags with those in the YouTube-8M dataset, and we show how ViTS enhances the semantic annotation of the videos with a larger number of labels (10.04 tags/video) at an accuracy of 80.87%.
Citation: Fernàndez, D., Varas, D., Espadaler, J., Masuda, I., Ferreira, J., Woodward, A., Rodríguez, D., Giro, X., Riveiro, J. C., Bou, E. ViTS: Video tagging system from massive web multimedia collections. In: Workshop on Web-Scale Vision and Social Media. "Proceedings of the 5th Workshop on Web-scale Vision and Social Media (VSM)". Venice: IEEE Press, 2017, pp. 337-346.