Universitat Politècnica de Catalunya

    • Català
    • Castellano
    • English
    • LoginRegisterLog in (no UPC users)
  • mailContact Us
  • world English 
    • Català
    • Castellano
    • English
  • userLogin   
      LoginRegisterLog in (no UPC users)

UPCommons. Global access to UPC knowledge


PoseScript: linking 3D human poses and natural language

Cite as:
hdl:2117/417743

Authors: Delmas, Ginger; Weinzaepfel, Philippe; Lucas, Thomas; Moreno-Noguer, Francesc; Rogez, Grégory
Document type: Article
Defense date: 2024
Rights access: Open Access
All rights reserved. This work is protected by the corresponding intellectual and industrial property rights. Without prejudice to any existing legal exemptions, reproduction, distribution, public communication or transformation of this work are prohibited without permission of the copyright holder.
Abstract
Natural language plays a critical role in many computer vision applications, such as image captioning, visual question answering, and cross-modal retrieval, to provide fine-grained semantic information. Unfortunately, while human pose is key to human understanding, current 3D human pose datasets lack detailed language descriptions. To address this issue, we have introduced the PoseScript dataset. This dataset pairs more than six thousand 3D human poses from AMASS with rich human-annotated descriptions of the body parts and their spatial relationships. Additionally, to increase the size of the dataset to a scale that is compatible with data-hungry learning algorithms, we have proposed an elaborate captioning process that generates automatic synthetic descriptions in natural language from given 3D keypoints. This process extracts low-level pose information, known as “posecodes”, using a set of simple but generic rules on the 3D keypoints. These posecodes are then combined into higher-level textual descriptions using syntactic rules. With automatic annotations, the amount of available data significantly scales up (100k), making it possible to effectively pretrain deep models for finetuning on human captions. To showcase the potential of annotated poses, we present three multi-modal learning tasks that utilize the PoseScript dataset. Firstly, we develop a pipeline that maps 3D poses and textual descriptions into a joint embedding space, allowing for cross-modal retrieval of relevant poses from large-scale datasets. Secondly, we establish a baseline for a text-conditioned model generating 3D poses. Thirdly, we present a learned process for generating pose descriptions. These applications demonstrate the versatility and usefulness of annotated poses in various tasks and pave the way for future research in the field. The dataset is available at https://europe.naverlabs.com/research/computer-vision/posescript/.
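The abstract's captioning pipeline — classify low-level geometric quantities on 3D keypoints into "posecodes", then turn posecodes into text with syntactic rules — can be sketched as follows. This is a minimal illustration of the idea only: the joint names, angle thresholds, and sentence templates are hypothetical, not the paper's actual rules or values.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle in degrees at keypoint b, formed by segments b->a and b->c."""
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def angle_posecode(angle_deg):
    """A simple, generic rule mapping a raw angle to a coarse category
    (one 'posecode'); thresholds here are illustrative."""
    if angle_deg < 60:
        return "completely bent"
    if angle_deg < 120:
        return "bent"
    return "straight"

def describe(side, joint, posecode):
    """A trivial syntactic rule combining a posecode into a sentence."""
    return f"The {side} {joint} is {posecode}."

# Toy 3D keypoints (hip, knee, ankle) of a sharply bent left leg.
hip = np.array([0.0, 1.0, 0.0])
knee = np.array([0.0, 0.5, 0.1])
ankle = np.array([0.0, 0.9, 0.3])

code = angle_posecode(joint_angle(hip, knee, ankle))
print(describe("left", "knee", code))  # -> The left knee is completely bent.
```

In the same spirit, many such posecodes (joint angles, relative positions, contacts) would be extracted per pose and aggregated into a full synthetic description.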
Description
© 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Citation: Delmas, G.D. [et al.]. PoseScript: linking 3D human poses and natural language. "IEEE transactions on pattern analysis and machine intelligence", 2024, p. 1-13.
URI: http://hdl.handle.net/2117/417743
DOI: 10.1109/TPAMI.2024.3407570
ISSN: 0162-8828
Publisher version: https://ieeexplore.ieee.org/document/10542395
Collections
  • ROBiri - Grup de Percepció i Manipulació Robotitzada de l'IRI - Articles de revista [178]
  • Doctorat en Automàtica, Robòtica i Visió - Articles de revista [199]

Files:
  • TPAMI3407570.pdf (8.611 MB, PDF)


© UPC. Servei de Biblioteques, Publicacions i Arxius

info.biblioteques@upc.edu
