
dc.contributor: Alberich Carramiñana, Maria
dc.contributor: Dimiccoli, Mariella
dc.contributor.author: Tura Vecino, Biel
dc.contributor.other: Universitat Politècnica de Catalunya. Departament de Matemàtiques
dc.date.accessioned: 2021-05-27T11:51:06Z
dc.date.issued: 2021-05
dc.identifier.uri: http://hdl.handle.net/2117/346221
dc.description: Institut de Robòtica i Informàtica Industrial
dc.description.abstract: Recent research has shown that, in particular domains, unsupervised learning algorithms achieve performance on par with, or even better than, fully supervised algorithms, avoiding the need for human-labelled data. The division of a video into events has been an active research topic for unsupervised algorithms, which exploit relations within the video itself to perform temporal segmentation. In particular, self-supervised learning has proved very useful for learning video representations without any annotations. This thesis proposes a self-supervised method for learning event representations of unconstrained complex activity videos. These are sequences of images with high temporal resolution and very small visual variance between events, yet with a clear semantic differentiation for humans. The assumption underlying the proposed model is that a video can be represented by a graph that encodes both semantic and temporal similarity between events. Our method follows two steps: first, meaningful initial features are extracted by a spatio-temporal backbone neural network trained on a self-supervised contrastive task. Then, starting from this initial embedding, low-dimensional graph-based event representation features are iteratively learned jointly with their underlying graph structure. The main contribution of this work is a function, parameterized by a graph neural network, that learns graph-based event feature representations by exploiting semantic and temporal relatedness in a fully end-to-end, self-supervised trainable approach. Experiments were performed on the challenging Breakfast Action Dataset, and we show that the proposed approach leads to an effective low-dimensional feature representation of the input data, suitable for the downstream task of event segmentation. Moreover, we show that the presented method, followed by a downstream clustering task, achieves metrics on par with the state of the art on video segmentation of complex activity videos.
dc.language.iso: eng
dc.publisher: Universitat Politècnica de Catalunya
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/3.0/es/
dc.subject: Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial
dc.subject: Àrees temàtiques de la UPC::Matemàtiques i estadística
dc.subject.lcsh: Artificial intelligence
dc.subject.other: Representation learning
dc.subject.other: Graph embedding
dc.subject.other: Video segmentation
dc.subject.other: Event representations
dc.title: Learning graph-based event representations for unconstrained video segmentation
dc.type: Master thesis
dc.subject.lemac: Intel·ligència artificial
dc.subject.ams: Classificació AMS::68 Computer science::68T Artificial intelligence
dc.identifier.slug: FME-2107
dc.rights.access: Restricted access - confidentiality agreement
dc.date.lift: 10000-01-01
dc.date.updated: 2021-05-27T05:26:36Z
dc.audience.educationlevel: Màster
dc.audience.mediator: Universitat Politècnica de Catalunya. Facultat de Matemàtiques i Estadística
dc.audience.degree: MÀSTER UNIVERSITARI EN MATEMÀTICA AVANÇADA I ENGINYERIA MATEMÀTICA (Pla 2010)
dc.contributor.covenantee: Institut de Robòtica i Informàtica Industrial
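
The abstract above describes a two-step pipeline: a self-supervised spatio-temporal backbone produces initial event features, and a graph neural network then refines them into low-dimensional embeddings over a graph that mixes temporal and semantic similarity, with segmentation obtained by a downstream clustering step. The Python/PyTorch sketch below only illustrates that idea and is not the thesis code: every name, dimension, and the training objective are assumptions, and the graph is built once here whereas the thesis learns it jointly with the features.

# Illustrative sketch only (not the thesis implementation). Assumes initial
# per-event features from a self-supervised backbone are already available;
# refines them with a simple graph convolution over a graph that mixes
# temporal adjacency and cosine (semantic) similarity.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
T, d_in, d_out = 200, 512, 32             # events, backbone dim, embedding dim
x = torch.randn(T, d_in)                  # stand-in for backbone features

def build_graph(feats, window=2, alpha=0.5):
    """Row-normalised adjacency mixing temporal proximity and cosine similarity."""
    f = F.normalize(feats, dim=1)
    sim = (f @ f.T).clamp(min=0)
    idx = torch.arange(feats.size(0))
    temporal = ((idx[:, None] - idx[None, :]).abs() <= window).float()
    a = alpha * temporal + (1 - alpha) * sim
    return a / a.sum(dim=1, keepdim=True)

class GraphEncoder(torch.nn.Module):
    """Two graph-convolution steps projecting to a low-dimensional embedding."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin1 = torch.nn.Linear(d_in, 128)
        self.lin2 = torch.nn.Linear(128, d_out)

    def forward(self, x, adj):
        h = F.relu(self.lin1(adj @ x))    # aggregate neighbours, then transform
        return self.lin2(adj @ h)

adj = build_graph(x)                      # fixed here; learned jointly in the thesis
model = GraphEncoder(d_in, d_out)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    z = model(x, adj)
    # Toy contrastive-style objective (an assumption, not the thesis loss):
    # pull temporally adjacent events together, push random pairs apart.
    pos = (z[1:] - z[:-1]).pow(2).sum(dim=1).mean()
    neg = (z - z[torch.randperm(T)]).pow(2).sum(dim=1).mean()
    loss = pos + F.relu(1.0 - neg)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    emb = model(x, adj)                   # low-dimensional event embeddings
# Clustering emb along the time axis (e.g. k-means) would give the final
# event segmentation discussed in the abstract.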

