Interaction-GCN: a graph convolutional network based framework for social interaction recognition in egocentric videos
Document type: Conference report
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Rights access: Open Access
Except where otherwise noted, content on this work is licensed under a Creative Commons license: Attribution-NonCommercial-NoDerivs 3.0 Spain.
In this paper we propose a new framework, which we name InteractionGCN, to categorize social interactions in egocentric videos. Our method extracts patterns of relational and non-relational cues at the frame level and uses them to build a relational graph, from which the interactional context at the frame level is estimated via a Graph Convolutional Network based approach. It then propagates this context over time, together with first-person motion information, through a Gated Recurrent Unit architecture. Ablation studies and experimental evaluation on two publicly available datasets validate the proposed approach and establish state-of-the-art results.
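The pipeline the abstract describes, per-frame graph convolution over a relational graph followed by temporal propagation with a GRU, can be sketched as follows. This is a minimal illustrative sketch in NumPy, not the paper's implementation: all dimensions, weights, and the mean-pooling of node features into a frame-level context vector are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(A, X, W):
    # One graph convolution: ReLU(D^-1/2 (A+I) D^-1/2 X W)
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(0.0, d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W)

def gru_step(h, x, Wz, Wr, Wh):
    # Minimal GRU cell; gates are computed from the concatenation [h, x]
    hx = np.concatenate([h, x])
    z = 1.0 / (1.0 + np.exp(-(Wz @ hx)))               # update gate
    r = 1.0 / (1.0 + np.exp(-(Wr @ hx)))               # reset gate
    h_tilde = np.tanh(Wh @ np.concatenate([r * h, x]))  # candidate state
    return (1.0 - z) * h + z * h_tilde

# Toy sizes (assumed): 3 people per frame, 4-d cues, 8-d graph/hidden features
n, d_in, d_g, d_h = 3, 4, 8, 8
W = rng.standard_normal((d_in, d_g))
Wz, Wr, Wh = (rng.standard_normal((d_h, d_h + d_g)) for _ in range(3))

h = np.zeros(d_h)                                   # temporal context state
for _ in range(5):                                  # iterate over 5 frames
    A = (rng.random((n, n)) > 0.5).astype(float)    # pairwise relation graph
    A = np.maximum(A, A.T)                          # make it symmetric
    X = rng.standard_normal((n, d_in))              # per-person cue features
    ctx = gcn_layer(A, X, W).mean(axis=0)           # pooled frame-level context
    h = gru_step(h, ctx, Wz, Wr, Wh)                # propagate context over time

print(h.shape)  # (8,)
```

In the actual method, the node features would come from detected relational and non-relational cues, and first-person motion information would be fed to the recurrent unit alongside the graph context.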
© 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Citation: Felicioni, S.; Dimiccoli, M. Interaction-GCN: a graph convolutional network based framework for social interaction recognition in egocentric videos. In: "Proceedings of 2021 IEEE International Conference on Image Processing (ICIP)". Institute of Electrical and Electronics Engineers (IEEE), 2021, p. 2348-2352. ISBN 978-1-6654-4115-5. DOI 10.1109/ICIP42928.2021.9506690.