Conference presentations/papers
http://hdl.handle.net/2117/3755
2024-03-28T16:19:01Z
Improving human-robot interaction effectiveness in human-robot collaborative object transportation using force prediction
Domínguez Vidal, José Enrique
Sanfeliu Cortés, Alberto
http://hdl.handle.net/2117/401426
2024-02-08T07:20:17Z
2024-02-08T07:16:12Z
In this work, we analyse the use of a prediction of the human’s force in a Human-Robot collaborative object transportation task at a middle distance. We check that this force prediction can improve multiple parameters associated with effective Human-Robot Interaction (HRI), such as perception of the robot’s contribution to the task, comfort, or trust in the robot in a physical Human-Robot Interaction (pHRI). We present a Deep Learning model that predicts the force that a human will exert in the next 1 s, using as inputs the force previously exerted by the human, the robot’s velocity and environment information obtained from the robot’s LiDAR. Its success rate reaches 92.3% on the test set and 89.1% in real experiments. We demonstrate that this force prediction, besides being directly usable to detect changes in the human’s intention, can be processed to obtain an estimate of the human’s desired trajectory. We have validated this approach with a user study involving 18 volunteers.
© 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
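The abstract describes the predictor only at the level of its inputs and outputs. As a purely illustrative sketch of how such a model could be wired up (this is not the authors' architecture; the layer sizes, the assumed 10 Hz sampling and every name below are placeholders), the three input streams might be combined as follows:

```python
import torch
import torch.nn as nn

class ForcePredictor(nn.Module):
    """Illustrative sketch: predict the human's force over the next second
    from past force, robot velocity and a LiDAR-derived feature vector.
    All dimensions here are assumptions, not taken from the paper."""

    def __init__(self, past_steps=10, future_steps=10, lidar_dim=64):
        super().__init__()
        # Encode the past force sequence (Fx, Fy per step) with a GRU.
        self.force_enc = nn.GRU(input_size=2, hidden_size=64, batch_first=True)
        # Robot velocity (vx, vy, omega) and LiDAR features go through small MLPs.
        self.vel_enc = nn.Sequential(nn.Linear(3, 32), nn.ReLU())
        self.lidar_enc = nn.Sequential(nn.Linear(lidar_dim, 64), nn.ReLU())
        # Decode a force sequence for the next `future_steps` samples.
        self.head = nn.Sequential(
            nn.Linear(64 + 32 + 64, 128), nn.ReLU(),
            nn.Linear(128, future_steps * 2),
        )
        self.future_steps = future_steps

    def forward(self, past_force, robot_vel, lidar_feat):
        # past_force: (B, past_steps, 2), robot_vel: (B, 3), lidar_feat: (B, lidar_dim)
        _, h = self.force_enc(past_force)             # h: (1, B, 64)
        z = torch.cat([h[-1], self.vel_enc(robot_vel), self.lidar_enc(lidar_feat)], dim=-1)
        out = self.head(z)                            # (B, future_steps * 2)
        return out.view(-1, self.future_steps, 2)     # predicted (Fx, Fy) per step

# Example: batch of 4, 10 past samples (~1 s at an assumed 10 Hz).
model = ForcePredictor()
pred = model(torch.randn(4, 10, 2), torch.randn(4, 3), torch.randn(4, 64))
print(pred.shape)  # torch.Size([4, 10, 2])
```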
Real-life experiment metrics for evaluating human-robot collaborative navigation tasks
Repiso Polo, Ely
Garrell Zulueta, Anais
Sanfeliu Cortés, Alberto
http://hdl.handle.net/2117/400853
2024-02-02T11:10:14Z
2024-02-02T11:07:10Z
As robots move from laboratories and industries to the real world, they must develop new abilities to collaborate with humans in various aspects, including human-robot collaborative navigation (HRCN) tasks. General methodologies are therefore needed to evaluate these robots’ behaviors, and they should incorporate both objective and subjective measurements. Objective measurements for evaluating a robot’s behavior while navigating with others can be accomplished using social distances in conjunction with task characteristics, people-robot relationships, and physical space. Additionally, the objective evaluation of the task must consider human behavior, which is influenced by changes in, and the structure of, the environment. Subjective evaluations of robot behavior can be conducted using surveys that address various aspects of robot usability. This includes people’s perceptions of their interaction during the collaborative task with the robot, focusing on aspects such as sociability, comfort, and task intelligence. Moreover, the communicative interaction between the agents (people and robots) involved in the collaborative task should also be evaluated. Therefore, this paper presents a comprehensive methodology for objectively and subjectively evaluating HRCN tasks.
© 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
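To make the objective side of such a methodology concrete, below is a minimal sketch of distance-based metrics computed from synchronized robot and human trajectories; the proxemic thresholds are generic textbook values, not the ones used in the paper:

```python
import numpy as np

def social_distance_metrics(robot_xy, human_xy, intimate_radius=0.45, personal_radius=1.2):
    """Illustrative objective metrics for one collaborative-navigation run.
    robot_xy, human_xy: (T, 2) synchronized trajectories in metres.
    The radii are common proxemics values, assumed here for illustration."""
    dist = np.linalg.norm(robot_xy - human_xy, axis=1)
    return {
        "min_distance_m": float(dist.min()),
        "mean_distance_m": float(dist.mean()),
        "time_in_intimate_zone": float(np.mean(dist < intimate_radius)),
        "time_in_personal_zone": float(np.mean(dist < personal_radius)),
    }

# Example with a synthetic 100-step side-by-side run.
t = np.linspace(0, 10, 100)
robot = np.stack([t, np.zeros_like(t)], axis=1)
human = np.stack([t, 0.8 + 0.1 * np.sin(t)], axis=1)
print(social_distance_metrics(robot, human))
```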
Inference vs. explicitness. Do we really need the perfect predictor? The human-robot collaborative object transportation case
Domínguez Vidal, José Enrique
Sanfeliu Cortés, Alberto
http://hdl.handle.net/2117/400847
2024-02-02T11:00:22Z
2024-02-02T10:53:52Z
When robots interact with humans, limitations in their internal models arise due to the uncertainty and even randomness of human behavior. This has led to attempts to predict human future actions and infer their intent. However, some authors argue for combining inference engines with communication systems that explicitly elicit human intention. This work builds on our Perception-Intention-Action (PIA) cycle, a framework that considers human intention at the same level as perception of the environment. The PIA cycle is used in a collaborative task to compare the effect on different human-robot interaction aspects of using a force predictor that infers human implicit intention versus a communication system that explicitly elicits human intention. A study with 18 volunteers shows that allowing humans to directly express themselves can achieve the same improvement as an intention predictor.
Body gesture recognition to control a social mobile robot
Laplaza Galindo, Javier
Romero Martín, Rut
Sanfeliu Cortés, Alberto
Garrell Zulueta, Anais
http://hdl.handle.net/2117/399866
2024-01-19T12:40:19Z
2024-01-19T12:35:18Z
In this work, we propose a gesture-based language to allow humans to interact with robots using their body in a natural way. We have created a new gesture detection model using neural networks and a new dataset of humans performing a collection of body gestures to train this architecture. Furthermore, we compare body gesture communication with other communication channels to demonstrate the importance of adding this knowledge to robots. The presented approach is validated in diverse simulations and real-life experiments with non-trained volunteers. It attains promising results and proves to be a valuable framework for social robotics applications such as human-robot collaboration and human-robot interaction.
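As an illustration of what a gesture detection model over body data can look like, here is a minimal sketch of a recurrent classifier over skeleton keypoint sequences; it is not the authors' network, and the keypoint count, window length and gesture vocabulary size are assumptions:

```python
import torch
import torch.nn as nn

class GestureClassifier(nn.Module):
    """Illustrative sketch of a body-gesture classifier: an LSTM over a
    sequence of 2D skeleton keypoints. The 17-keypoint layout, 30-frame
    window and 8-gesture set are assumptions, not the paper's setup."""

    def __init__(self, num_keypoints=17, num_gestures=8, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=num_keypoints * 2, hidden_size=hidden,
                            num_layers=2, batch_first=True)
        self.fc = nn.Linear(hidden, num_gestures)

    def forward(self, keypoints):
        # keypoints: (B, T, num_keypoints, 2) normalized image coordinates
        b, t = keypoints.shape[:2]
        out, _ = self.lstm(keypoints.view(b, t, -1))
        return self.fc(out[:, -1])   # logits over gesture classes

model = GestureClassifier()
logits = model(torch.randn(2, 30, 17, 2))   # 2 clips, 30 frames each
print(logits.argmax(dim=-1))                 # predicted gesture index per clip
```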
Human acceptance in the Human-Robot Interaction scenario for last-mile goods delivery
Puig-Pey Clavería, Ana María
Zamora i Mestre, Joan-Lluís
Amante García, Beatriz
Moreno Sanz, Joan
Garrell Zulueta, Anais
Grau Saldes, Antoni
Bolea Monte, Yolanda
Santamaria Navarro, Àngel
Sanfeliu Cortés, Alberto
http://hdl.handle.net/2117/397515
2023-12-12T05:46:33Z
2023-12-01T10:21:48Z
The introduction of robotic technology in an existing scenario must be analyzed from the point of view of all the human roles involved in that scenario. When dealing with urban public space, the analysis must consider a large group of citizens who carry out different activities in it. The purpose of this article is to analyze the human roles and the human acceptance when robotic technology is introduced in the last-mile distribution of goods in urban areas. In this work, we start with the description of the Human-Robot Interaction (HRI) scenario for last-mile goods delivery, we describe the human roles, and we propose a set of relevant indicators to evaluate human acceptance for this task. Finally, we evaluate human acceptance through qualitative interviews and quantitative surveys. The study covers the peer end-user and bystander human roles, with around 100 participants.
© 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Perception-intention-action cycle as a human acceptable way for improving human-robot collaborative tasks
Domínguez Vidal, José Enrique
Rodríguez Linares, Nicolás Adrián
Sanfeliu Cortés, Alberto
http://hdl.handle.net/2117/388323
2023-11-12T01:59:31Z
2023-06-07T10:27:25Z
In Human-Robot Collaboration (HRC) tasks, the classical Perception-Action cycle cannot fully explain the collaborative behaviour of the human-robot pair until it is extended to the Perception-Intention-Action (PIA) cycle, which gives the human's intention a key role at the same level as the robot's perception rather than as a sub-block of it. Although part of the human's intention can be perceived or inferred by the other agent, this is prone to misunderstandings, so in some cases the true intention has to be explicitly communicated to fulfill the task. Here, we explore both types of intention and combine them with the robot's perception through the concept of Situation Awareness (SA). We validate the PIA cycle and its acceptance by the user with a preliminary experiment in an object transportation task, showing that its usage can increase trust in the robot.
© ACM 2023. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in HRI '23: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, https://doi.org/10.1145/3568294.3580149.
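To make the cycle concrete, below is a minimal sketch of one Perception-Intention-Action iteration in which both an inferred and an explicitly communicated intention are available and the explicit one takes precedence; it illustrates the idea only and is not the authors' implementation (all names and the callback interface are assumptions):

```python
from dataclasses import dataclass

@dataclass
class Intention:
    """Human intention as a desired direction plus a confidence value."""
    direction: tuple   # e.g. a unit vector (dx, dy)
    confidence: float  # 0..1
    explicit: bool     # True if stated by the human, False if inferred

def pia_step(perceive, infer_intention, read_explicit_intention, act):
    """One Perception-Intention-Action iteration (illustrative sketch):
    perception and intention are obtained as peers, and an explicitly
    communicated intention overrides the inferred one when available."""
    world = perceive()                          # robot's perception of the scene
    inferred = infer_intention(world)           # e.g. from a force predictor
    explicit = read_explicit_intention()        # e.g. from a verbal/gesture channel
    intention = explicit if explicit is not None else inferred
    act(world, intention)                       # plan and execute using both

# Minimal usage with stand-in callbacks.
pia_step(
    perceive=lambda: {"obstacles": []},
    infer_intention=lambda w: Intention((1.0, 0.0), 0.7, explicit=False),
    read_explicit_intention=lambda: None,
    act=lambda w, i: print("acting towards", i.direction, "explicit:", i.explicit),
)
```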
Single-view 3D body and cloth reconstruction under complex poses
Ugrinovic Kehdy, Nicolas
Pumarola Peris, Albert
Sanfeliu Cortés, Alberto
Moreno-Noguer, Francesc
http://hdl.handle.net/2117/385206
2023-09-24T03:42:32Z
2023-03-20T10:53:41Z
Recent advances in 3D human shape reconstruction from single images have shown impressive results, leveraging deep networks that model the so-called implicit function to learn the occupancy status of arbitrarily dense 3D points in space. However, while current algorithms based on this paradigm, like PiFuHD (Saito et al., 2020), are able to estimate accurate geometry of the human shape and clothes, they require high-resolution input images and are not able to capture complex body poses. Most training and evaluation is performed on 1k-resolution images of humans standing in front of the camera under neutral body poses. In this paper, we leverage publicly available data to extend existing implicit function-based models to deal with images of humans that can have arbitrary poses and self-occluded limbs. We argue that the representation power of the implicit function is not sufficient to simultaneously model details of the geometry and of the body pose. We therefore propose a coarse-to-fine approach in which we first learn an implicit function that maps the input image to a 3D body shape with a low level of detail, but which correctly fits the underlying human pose, despite its complexity. We then learn a displacement map, conditioned on the smoothed surface and on the input image, which encodes the high-frequency details of the clothes and body. In the experimental section, we show that this coarse-to-fine strategy represents a very good trade-off between shape detail and pose correctness, comparing favorably to the most recent state-of-the-art approaches. Our code will be made publicly available.
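As a rough illustration of the coarse-to-fine idea (not the paper's networks; the image encoder, feature sizes and the per-vertex displacement formulation below are placeholders standing in for the learned displacement map), a two-stage forward pass could be organised like this:

```python
import torch
import torch.nn as nn

class CoarseToFineReconstructor(nn.Module):
    """Illustrative sketch: stage 1 predicts occupancy of 3D query points
    from image features (coarse implicit function); stage 2 predicts
    per-vertex displacements that add clothing/body detail to the smoothed
    coarse surface. Encoders and sizes are assumptions."""

    def __init__(self, img_feat_dim=256):
        super().__init__()
        self.image_encoder = nn.Sequential(            # stand-in for a CNN backbone
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, img_feat_dim))
        self.occupancy_mlp = nn.Sequential(             # coarse implicit function
            nn.Linear(img_feat_dim + 3, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Sigmoid())
        self.displacement_mlp = nn.Sequential(          # fine, high-frequency detail
            nn.Linear(img_feat_dim + 3, 256), nn.ReLU(),
            nn.Linear(256, 3))

    def forward(self, image, query_points, coarse_vertices):
        # image: (B, 3, H, W); query_points: (B, N, 3); coarse_vertices: (B, V, 3)
        feat = self.image_encoder(image)                                    # (B, F)
        f_q = feat.unsqueeze(1).expand(-1, query_points.shape[1], -1)
        occupancy = self.occupancy_mlp(torch.cat([f_q, query_points], -1))  # (B, N, 1)
        f_v = feat.unsqueeze(1).expand(-1, coarse_vertices.shape[1], -1)
        refined = coarse_vertices + self.displacement_mlp(
            torch.cat([f_v, coarse_vertices], -1))                          # (B, V, 3)
        return occupancy, refined

model = CoarseToFineReconstructor()
occ, verts = model(torch.randn(1, 3, 128, 128), torch.randn(1, 1000, 3), torch.randn(1, 500, 3))
print(occ.shape, verts.shape)
```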
Classification of humans social relations within urban areas
Castro Arcusa, Oscar
Repiso Polo, Ely
Garrell Zulueta, Anais
Sanfeliu Cortés, Alberto
http://hdl.handle.net/2117/384229
2024-01-14T05:54:56Z
2023-02-27T09:11:24Z
This paper presents the design of deep learning architectures that classify the social relationship existing between two people who are walking in a side-by-side formation into four possible categories: colleagues, couple, family, or friendship. The models are developed using Neural Networks or Recurrent Neural Networks and are trained and evaluated using a database obtained from humans walking together in an urban environment. The best model achieves good accuracy on the classification problem, and its results improve on the outcomes of a previous study [1]. In addition, we have developed several models to classify the social interactions into two categories, "intimate" and "acquaintances", where the best model achieves very good performance; for a real robot, this classification is enough to customize its behavior to its users. Furthermore, the proposed models show potential for further efficiency improvements and for implementation on a real robot.
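For illustration, a minimal recurrent classifier over pair-trajectory features with the four categories named in the abstract could be sketched as follows; the input feature layout and all sizes are assumptions, not the architectures evaluated in the paper:

```python
import torch
import torch.nn as nn

class RelationClassifier(nn.Module):
    """Illustrative sketch: classify the relationship of a side-by-side
    walking pair from their joint trajectory with a recurrent network.
    The per-step features (positions of both pedestrians, their distance
    and speed difference) and sizes are assumptions."""

    CLASSES = ["colleagues", "couple", "family", "friendship"]

    def __init__(self, feat_dim=6, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, len(self.CLASSES))

    def forward(self, traj):
        # traj: (B, T, feat_dim) per-timestep pair features
        _, h = self.rnn(traj)
        return self.fc(h[-1])   # class logits

model = RelationClassifier()
logits = model(torch.randn(3, 50, 6))   # 3 pairs, 50 timesteps each
print([RelationClassifier.CLASSES[int(i)] for i in logits.argmax(-1)])
```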
Context and intention for 3D human motion prediction: experimentation and user study in handover tasks
Laplaza Galindo, Javier
Garrell Zulueta, Anais
Moreno-Noguer, Francesc
Sanfeliu Cortés, Alberto
http://hdl.handle.net/2117/383564
2023-02-16T14:10:19Z
2023-02-16T14:06:24Z
In this work we present a novel attention-based deep learning model that uses context and human intention for 3D human body motion prediction in human-robot handover tasks. The model uses a multi-head attention architecture that incorporates as inputs the human motion, the robot end effector and the position of the obstacles. The outputs of the model are the predicted motion of the human body and the predicted human intention. We use this model to analyze a collaborative handover task in which the robot predicts the future motion of the human and uses this information in its planner. Several experiments are performed in which human volunteers fill in a standard questionnaire to rate different features, comparing when the robot uses the prediction versus when it does not.
© 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
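As a sketch of an attention-based predictor with the same inputs and outputs as described in the abstract (and nothing more: the token layout, sizes, horizon and the binary intention output are assumptions, not the paper's model):

```python
import torch
import torch.nn as nn

class HandoverMotionPredictor(nn.Module):
    """Illustrative sketch: a transformer encoder over tokens built from the
    past human skeleton motion, the robot end-effector position and the
    obstacle positions, with one head for future motion and one for intention."""

    def __init__(self, joint_dim=3 * 25, d_model=128, horizon=20):
        super().__init__()
        self.embed_human = nn.Linear(joint_dim, d_model)   # one token per past frame
        self.embed_robot = nn.Linear(3, d_model)           # end-effector position token
        self.embed_obst = nn.Linear(3, d_model)            # one token per obstacle
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.motion_head = nn.Linear(d_model, horizon * joint_dim)
        self.intention_head = nn.Linear(d_model, 1)
        self.horizon, self.joint_dim = horizon, joint_dim

    def forward(self, human_past, robot_ee, obstacles):
        # human_past: (B, T, joint_dim), robot_ee: (B, 3), obstacles: (B, K, 3)
        tokens = torch.cat([self.embed_human(human_past),
                            self.embed_robot(robot_ee).unsqueeze(1),
                            self.embed_obst(obstacles)], dim=1)
        ctx = self.encoder(tokens).mean(dim=1)               # pooled context vector
        motion = self.motion_head(ctx).view(-1, self.horizon, self.joint_dim)
        intention = torch.sigmoid(self.intention_head(ctx))  # e.g. "will hand over now"
        return motion, intention

model = HandoverMotionPredictor()
m, p = model(torch.randn(2, 30, 75), torch.randn(2, 3), torch.randn(2, 5, 3))
print(m.shape, p.shape)   # (2, 20, 75) (2, 1)
```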
Robot navigation anticipative strategies in deep reinforcement motion planning
Gil Viyuela, Óscar
Sanfeliu Cortés, Alberto
http://hdl.handle.net/2117/382993
2023-11-19T01:31:47Z
2023-02-14T13:30:31Z
The navigation of robots in dynamic urban environments requires elaborate anticipative strategies for the robot to avoid collisions with dynamic objects, like bicycles or pedestrians, and to be human-aware. We have developed and analyzed three anticipative strategies in motion planning that take into account the future motion of mobile objects that can move at up to 18 km/h. First, we have used our hybrid policy resulting from Deep Deterministic Policy Gradient (DDPG) training and the Social Force Model (SFM), and we have tested it in simulation in four complex map scenarios with many pedestrians. Second, we have used these anticipative strategies in real-life experiments using the hybrid motion planning method and the ROS Navigation Stack with the Dynamic Window Approach (NS-DWA). The results in simulations and real-life experiments show very good performance in open environments and also in mixed scenarios with narrow spaces.
The version of record of this article, first published in ROBOT2022: Fifth Iberian Robotics Conference, is available online at Publisher’s website: http://dx.doi.org/10.1007/978-3-031-21062-4_6
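To illustrate the flavour of an anticipative strategy, the sketch below propagates dynamic obstacles with a constant-velocity assumption over a short horizon and blends a stand-in learned action with a generic Social Force Model repulsion term; the horizon, gains and blending are assumptions, not the strategies evaluated in the paper:

```python
import numpy as np

def anticipate_positions(positions, velocities, horizon=1.5, dt=0.3):
    """Illustrative anticipation: propagate dynamic obstacles under a
    constant-velocity assumption so the planner can avoid where they
    will be, not where they are. Horizon and step are assumed values."""
    steps = np.arange(dt, horizon + 1e-9, dt)
    # (K, S, 2): future position of each obstacle at each lookahead step
    return positions[:, None, :] + velocities[:, None, :] * steps[None, :, None]

def social_force(robot_pos, obstacle_pos, a=2.0, b=0.5):
    """Repulsive term of a Social Force Model in its standard exponential
    form; the gains a, b are generic, not the paper's calibration."""
    diff = robot_pos - obstacle_pos                      # (N, 2)
    dist = np.linalg.norm(diff, axis=1, keepdims=True) + 1e-6
    return (a * np.exp(-dist / b) * diff / dist).sum(axis=0)

# Example: blend a stand-in learned policy action with the repulsion from the
# anticipated positions of one pedestrian moving at 5 m/s (18 km/h).
robot = np.array([0.0, 0.0])
future = anticipate_positions(np.array([[3.0, 0.5]]), np.array([[-5.0, 0.0]]))
policy_action = np.array([1.0, 0.0])                     # e.g. output of a DDPG policy
command = policy_action + social_force(robot, future.reshape(-1, 2))
print(command)
```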