Exploring transformers and visual transformers for force prediction in human-robot collaborative transportation tasks
Cite as:
hdl:2117/414048
Document type: Conference report
Defense date: 2024
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Rights access: Restricted access - publisher's policy (embargoed until 2026-05-14)
All rights reserved. This work is protected by the corresponding intellectual and industrial property rights. Without prejudice to any existing legal exemptions, reproduction, distribution, public communication or transformation of this work are prohibited without permission of the copyright holder.
Abstract
In this paper, we analyze the possibilities offered by state-of-the-art Deep Learning architectures such as Transformers and Visual Transformers for predicting the human's force in a human-robot collaborative object transportation task at a middle distance. We outperform our previous predictor, achieving a success rate of 93.8% on the test set and 90.9% in real experiments with 21 volunteers, in both cases predicting the force that the human will exert during the next 1 s. A modification of the architecture allows us to obtain a second output from the model with a velocity prediction, which improves the capabilities of our predictor when it is used to estimate the trajectory that the human-robot pair will follow. An ablation test is also performed to verify the relative contribution of each input to performance.
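The abstract describes a shared sequence encoder with two prediction heads (force and velocity). The paper's actual architecture is not reproduced in this record; the following is only a minimal numpy sketch of that general idea — a self-attention encoder over a window of past observations feeding two separate linear heads. All names, dimensions, and the single-layer structure are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (T, d) window of past observations; single attention head
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (T, T) attention logits
    return softmax(scores) @ V                 # (T, d) encoded sequence

rng = np.random.default_rng(0)
T, d = 10, 8                     # hypothetical: 10 past steps, 8 features each
X = rng.normal(size=(T, d))      # stand-in for the observation window

# Untrained random weights, for shape illustration only
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
H = self_attention(X, Wq, Wk, Wv)   # shared encoder representation
pooled = H.mean(axis=0)             # (d,) summary of the window

W_force = rng.normal(size=(d, 3))   # head 1: 3-D force prediction
W_vel = rng.normal(size=(d, 3))     # head 2: 3-D velocity prediction
force_pred = pooled @ W_force
vel_pred = pooled @ W_vel
```

The point of the sketch is the branching: one encoder output feeds both heads, so the velocity prediction comes at little extra cost, matching the abstract's claim that a small architectural modification yields the second output.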
Description
© 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Citation: Dominguez-Vidal, J.E.; Sanfeliu, A. Exploring transformers and visual transformers for force prediction in human-robot collaborative transportation tasks. A: IEEE International Conference on Robotics and Automation. "2024 IEEE International Conference on Robotics and Automation, 2024, Yokohama (Japan)". Institute of Electrical and Electronics Engineers (IEEE), 2024, p. 3191-3197. ISBN 979-8-3503-8457-4. DOI 10.1109/ICRA57147.2024.10611205.
ISBN: 979-8-3503-8457-4
Publisher version: https://ieeexplore.ieee.org/document/10611205
Other identifiers: https://www.iri.upc.edu/publications/show/2884
Collections
- Departament d'Enginyeria de Sistemes, Automàtica i Informàtica Industrial - Conference presentations/communications [1.520]
- RAIG - Mobile Robotics and Artificial Intelligence Group - Conference presentations/communications [14]
- VIS - Visió Artificial i Sistemes Intel·ligents - Conference presentations/communications [296]
- Doctorat en Automàtica, Robòtica i Visió - Conference presentations/communications [180]
| Files | Description | Size | View |
|---|---|---|---|
| Transformers_Fo ... 24___accepted_version_.pdf | Article | 2,504 Mb | Restricted access |