UPCommons. Global access to UPC knowledge


Seeing and hearing egocentric actions: how much can we learn?

Cite as:
hdl:2117/187293

Cartas Ayala, Alejandro
Luque, Jordi
Radeva, Petia
Segura, Carlos
Dimiccoli, Mariella
Document type: Conference report
Defense date: 2019
Rights access: Open Access
Attribution-NonCommercial-NoDerivs 3.0 Spain
Except where otherwise noted, content of this work is licensed under a Creative Commons license: Attribution-NonCommercial-NoDerivs 3.0 Spain
Abstract
Our interaction with the world is an inherently multi-modal experience. However, the understanding of human-to-object interactions has historically been addressed focusing on a single modality. In particular, only a limited number of works have considered integrating the visual and audio modalities for this purpose. In this work, we propose a multimodal approach for egocentric action recognition in a kitchen environment that relies on audio and visual information. Our model combines a sparse temporal sampling strategy with a late fusion of audio, spatial, and temporal streams. Experimental results on the EPIC-Kitchens dataset show that multimodal integration leads to better performance than unimodal approaches. In particular, we achieved a 5.18% improvement over the state of the art on verb classification.
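The record does not include code, but the late-fusion step described in the abstract can be illustrated with a minimal sketch: per-stream class scores are converted to probabilities and averaged. The stream logits, the class count, and the equal weighting below are assumptions for illustration only; the actual model combines full audio, spatial, and temporal networks with sparse temporal sampling.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over class logits.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def late_fusion(stream_logits, weights=None):
    """Fuse per-stream class logits by averaging their softmax scores.

    stream_logits: list of arrays, one per stream (e.g. audio, spatial,
    temporal), each of shape (num_classes,). weights is an optional
    per-stream weighting; by default all streams count equally.
    """
    probs = np.stack([softmax(l) for l in stream_logits])
    if weights is None:
        weights = np.ones(len(stream_logits)) / len(stream_logits)
    fused = (np.asarray(weights)[:, None] * probs).sum(axis=0)
    return fused.argmax(), fused

# Toy example: 3 streams and 5 hypothetical verb classes; random logits
# stand in for the outputs of the audio, spatial and temporal networks.
rng = np.random.default_rng(0)
audio, spatial, temporal = (rng.normal(size=5) for _ in range(3))
pred, scores = late_fusion([audio, spatial, temporal])
print(pred, scores.round(3))
```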
Description
© 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Citation: Cartas, A. [et al.]. Seeing and hearing egocentric actions: how much can we learn? In: ICCVW - IEEE International Conference on Computer Vision Workshops. "2019 International Conference on Computer Vision ICCV 2019: proceedings: 27 October - 2 November 2019, Seoul, Korea". 2019, p. 4470-4480.
URI: http://hdl.handle.net/2117/187293
DOI: 10.1109/ICCVW.2019.00548
Publisher version: https://ieeexplore.ieee.org/document/9022020
Collections
  • IRI - Institut de Robòtica i Informàtica Industrial, CSIC-UPC - Ponències/Comunicacions de congressos [463]

Files:
  • 2264-Seeing-and-Hearing-Egocentric-Actions_-How-Much-Can-We-Learn_.pdf (4.958 MB, PDF)


© UPC. Servei de Biblioteques, Publicacions i Arxius

info.biblioteques@upc.edu
