
dc.contributor.author: Domínguez Vidal, José Enrique
dc.contributor.author: Sanfeliu Cortés, Alberto
dc.contributor.other: Universitat Politècnica de Catalunya. Doctorat en Automàtica, Robòtica i Visió
dc.contributor.other: Universitat Politècnica de Catalunya. Departament d'Enginyeria de Sistemes, Automàtica i Informàtica Industrial
dc.date.accessioned: 2025-02-25T12:13:31Z
dc.date.available: 2025-02-25T12:13:31Z
dc.date.issued: 2024
dc.identifier.citation: Dominguez-Vidal, J.E.; Sanfeliu, A. Force and velocity prediction in human-robot collaborative transportation tasks through video retentive networks. In: IEEE/RSJ International Conference on Intelligent Robots and Systems. "2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)". Institute of Electrical and Electronics Engineers (IEEE), 2024, p. 9307-9313. ISBN 979-8-3503-7770-5. DOI 10.1109/IROS58592.2024.10801981.
dc.identifier.isbn: 979-8-3503-7770-5
dc.identifier.uri: http://hdl.handle.net/2117/424988
dc.description: © 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.description.abstract: In this article, we propose a generalization of Retentive Networks, a state-of-the-art Deep Learning architecture, so that it can accept video sequences as input. With this generalization, we design a force/velocity predictor for the medium-distance human-robot collaborative object transportation task. We achieve better results than with our previous predictor, reaching success rates on the test set of up to 93.7% in predicting the force to be exerted by the human and up to 96.5% in predicting the velocity of the human-robot pair over the next 1 s, and up to 91.0% and 95.0%, respectively, in real experiments. The new architecture also improves inference times by up to 32.8% across different graphics cards. Finally, an ablation test shows that one of the input variables used so far, the position of the task goal, can be discarded, allowing the goal to be chosen dynamically by the human instead of being pre-set.
dc.format.extent: 7 p.
dc.language.iso: eng
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.subject: Àrees temàtiques de la UPC::Informàtica::Automàtica i control
dc.subject.other: Physical human-robot interaction
dc.subject.other: Object transportation
dc.subject.other: Force prediction
dc.subject.other: Human-in-the-loop
dc.title: Force and velocity prediction in human-robot collaborative transportation tasks through video retentive networks
dc.type: Conference report
dc.contributor.group: Universitat Politècnica de Catalunya. RAIG - Mobile Robotics and Artificial Intelligence Group
dc.contributor.group: Universitat Politècnica de Catalunya. VIS - Visió Artificial i Sistemes Intel·ligents
dc.identifier.doi: 10.1109/IROS58592.2024.10801981
dc.description.peerreviewed: Peer Reviewed
dc.relation.publisherversion: https://ieeexplore.ieee.org/
dc.rights.access: Open Access
local.identifier.drac: 40523572
dc.description.version: Postprint (author's final draft)
dc.relation.projectid: info:eu-repo/grantAgreement/EC/H2020/101016906/EU/A Collaborative Paradigm for Human Workers and Multi-Robot Teams in Precision Agriculture Systems/CANOPIES
local.citation.author: Dominguez-Vidal, J. E.; Sanfeliu, A.
local.citation.contributor: IEEE/RSJ International Conference on Intelligent Robots and Systems
local.citation.publicationName: 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
local.citation.startingPage: 9307
local.citation.endingPage: 9313

