Enhancing robotic collaborative tasks through contextual human motion prediction and intention inference
Cite as:
hdl:2117/413535
Document type: Article
Defense date: 2024-07-13
Rights access: Open Access
Except where otherwise noted, content on this work is licensed under a Creative Commons license: Attribution-NonCommercial-NoDerivs 4.0 International
Abstract
Predicting human motion based on a sequence of past observations is crucial for various applications in robotics and computer vision. Currently, this problem is typically addressed by training deep learning models on the 3D human motion datasets most widely used in the community. However, these datasets generally do not capture how humans behave and move when a robot is nearby, leading to a data distribution different from the real distribution of motion that robots will encounter when collaborating with humans. Additionally, incorporating contextual information related to the interactive task between the human and the robot, as well as information about the human's willingness to collaborate with the robot, can not only improve the accuracy of the predicted sequence but also serve as a useful tool for robots to navigate collaborative tasks successfully. In this research, we propose a deep learning architecture that predicts both 3D human body motion and human intention for collaborative tasks. The model employs a multi-head attention mechanism, taking human motion and task context as inputs. The resulting outputs are the predicted motion of the human body and the inferred human intention. We have validated this architecture in two different tasks: collaborative object handover and collaborative grape harvesting. While the architecture remains the same for both tasks, the inputs differ. In the handover task, the architecture takes human motion, robot end-effector, and obstacle positions as inputs. Additionally, the model can be conditioned on the desired intention to tailor the output motion accordingly. To assess the performance of the collaborative handover task, we conducted a user study evaluating human perception of the robot's sociability, naturalness, security, and comfort, comparing the robot's behavior when it used the prediction in its planner versus when it did not.
Furthermore, we applied the model to a collaborative grape harvesting task. By integrating human motion prediction and human intention inference, our architecture shows promising results in enhancing the capabilities of robots in collaborative scenarios. The model's flexibility allows it to handle various tasks with different inputs, making it adaptable to real-world applications.
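The abstract describes a multi-head attention model that fuses past human motion with task context (robot end-effector and obstacle positions) and emits two outputs: a future motion sequence and an intention estimate. The paper's actual architecture is not reproduced here; the following is a minimal NumPy sketch of that input/output pattern, where all dimensions, weight matrices, and the random projections standing in for learned parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def multi_head_attention(q, k, v, num_heads):
    """Scaled dot-product attention with the embedding split across heads.
    q: (Tq, D) queries; k, v: (Tk, D) keys/values. Returns (Tq, D)."""
    dh = q.shape[1] // num_heads
    outs = []
    for h in range(num_heads):
        s = slice(h * dh, (h + 1) * dh)
        scores = q[:, s] @ k[:, s].T / np.sqrt(dh)
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)          # softmax over keys
        outs.append(w @ v[:, s])
    return np.concatenate(outs, axis=-1)

# Hypothetical sizes: 10 observed frames, 25 predicted frames,
# 15 body joints in 3D, embedding width 64, 2 intention classes.
T_past, T_future, J, D, N_INTENT = 10, 25, 15, 64, 2

# Random projections stand in for learned embedding weights.
W_motion = rng.normal(size=(J * 3, D))
W_ctx = rng.normal(size=(3, D))

past_motion = rng.normal(size=(T_past, J * 3))   # observed joint positions
context = rng.normal(size=(2, 3))                # end-effector + obstacle (xyz)

# Fuse motion tokens and context tokens with self-attention.
tokens = np.concatenate([past_motion @ W_motion, context @ W_ctx], axis=0)
fused = multi_head_attention(tokens, tokens, tokens, num_heads=4)

# Two output heads: future poses (via learned future queries) and intention.
future_queries = rng.normal(size=(T_future, D))
decoded = multi_head_attention(future_queries, fused, fused, num_heads=4)
pred_motion = decoded @ rng.normal(size=(D, J * 3))      # (25, 45)
intent_logits = fused.mean(axis=0) @ rng.normal(size=(D, N_INTENT))  # (2,)
```

Conditioning on a desired intention, as described for the handover task, would amount to appending one more context token encoding that intention before the fusion step.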
Description
Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License.
Citation: Laplaza, J.; Moreno-Noguer, F.; Sanfeliu, A. Enhancing robotic collaborative tasks through contextual human motion prediction and intention inference. "International Journal of Social Robotics", 13 July 2024.
ISSN: 1875-4791
Publisher version: https://link.springer.com/article/10.1007/s12369-024-01140-2
Collections
- Departament d'Enginyeria de Sistemes, Automàtica i Informàtica Industrial - Journal articles [1,451]
- Doctorat en Automàtica, Robòtica i Visió - Journal articles [179]
- VIS - Visió Artificial i Sistemes Intel·ligents - Journal articles [144]
- RAIG - Mobile Robotics and Artificial Intelligence Group - Journal articles [12]
Files | Description | Size | Format
---|---|---|---
s12369-024-01140-2.pdf | | 2.377 MB | PDF