Doble Màster universitari en Enginyeria Industrial i Automàtica i Robòtica (ETSEIB) (double master's degree)
Collection: http://hdl.handle.net/2117/101162

Robot agnostic interface for industrial applications
http://hdl.handle.net/2117/394157 (deposited 2023-09-27; also 2117/394156)
Ibañez Moreno, Alvaro
The rapid evolution of robotic arms has produced many manufacturers, such as Universal Robots, ABB, and Fanuc. Each manufacturer offers a unique interface to program and control its robots. This can limit companies' choices when selecting a suitable robot for their industrial operations, as they tend to choose an interface that does not require new training. For that reason, and based on the experience at UPC CIM, this project focuses on creating a common interface for robotic arms. The main objectives are to produce an interface that can simulate robots from different manufacturers, save and load data, and provide a simple scripting language. Using ROS, an open-source software infrastructure for communication between different robotic elements, and Python, the code is organized into five modules: launching the application, obtaining information about the robot, editing files, moving the robot, and scripting actions. To test the resulting interface, a setup sequence is first performed to probe the limits of the interaction. Then, three theoretical scenarios are proposed, and a scripting sequence is created for each one: Pick and Place, Sorting, and Bin Picking. While limited in some aspects, the application performs as expected and offers the basic options needed for many robot deployments. This project opens new options for the future of robot interaction, and others could develop the program further.
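
A minimal sketch of the vendor-agnostic approach the abstract describes, assuming hypothetical names (RobotDriver, URDriver) and a toy two-command script language; the project's actual ROS-based modules are not reproduced here:

```python
from abc import ABC, abstractmethod

class RobotDriver(ABC):
    """Vendor-neutral interface; each manufacturer would get its own subclass."""
    @abstractmethod
    def move_joints(self, joints): ...

    @abstractmethod
    def set_gripper(self, closed): ...

class URDriver(RobotDriver):
    """Stand-in driver; a real one would send commands through ROS."""
    def move_joints(self, joints):
        print(f"[UR] moving to joint targets {joints}")

    def set_gripper(self, closed):
        print(f"[UR] gripper {'closed' if closed else 'open'}")

def run_script(lines, robot):
    """Interpret a tiny vendor-agnostic script, one command per line."""
    for line in lines:
        cmd, *args = line.split()
        if cmd == "MOVE":
            robot.move_joints([float(a) for a in args])
        elif cmd == "GRIP":
            robot.set_gripper(args[0] == "CLOSE")
        else:
            raise ValueError(f"unknown command: {cmd}")

run_script(["MOVE 0 -1.57 1.57 0 0 0", "GRIP CLOSE"], URDriver())
```

Swapping URDriver for an ABBDriver or FanucDriver would leave the script untouched, which is the point of the common interface.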

Collaborative Human-Robot Object Transportation
http://hdl.handle.net/2117/373524 (deposited 2022-09-27; also 2117/373523)
Rodríguez Linares, Nicolás Adrián

Exploration of methods for in-hand slip detection with an event-based camera during pick-and-place motions
http://hdl.handle.net/2117/373513 (deposited 2022-09-27)
Bhagwan Bahrunani, Albert
Pick-and-place motions executed by robotic arms are widely used in industry, and they need to be performed effectively and without errors such as slips and grasp failures. Concretely, rotational slip may occur when the object is grasped away from its center of mass, and the resulting change of orientation may cause issues when placing it. In this thesis, the problem is tackled with an event-based camera, which triggers an input event only when the change in illumination at a specific image location crosses a predefined threshold. This excludes redundant information from static parts of the scene and enables systems with low latency, high dynamic range, high temporal resolution, and low power consumption. Slip detection in manipulation tasks using event-based cameras is a novel topic: only a handful of papers in the literature tackle it, and most do not consider motions as large as the pick-and-place scenarios studied here. The main contributions of this work are the design of the data acquisition system and an exploration of data processing methods to infer properties of the scene (motion, slip, etc.) from the acquired data. In the experimental setup, the event-based camera (DAVIS 346) is mounted on the robotic arm (Panda) with a purpose-designed reconfigurable camera mount, offering an external view of the contact between the object and the two-finger parallel gripper used as end-effector. With this setup, small datasets were recorded containing slip and non-slip cases during pick-and-place motions with different objects and backgrounds. Since the topic is exploratory and data is therefore scarce, the data processing approach relies on feature engineering: events are processed into alternative representations, such as event histograms and optical flow, to assess their usefulness for slip detection. Concretely, the ratio between the events coming from the object and those in the whole image, and the vertical absolute mean velocity of the object, are treated as one-dimensional signals that can be thresholded to determine whether a slip is happening. Several solutions are proposed and compared for discriminating object events from background events. The results show that both signals are indeed informative for slip detection, with some limitations in generalizing across objects and backgrounds; possible solutions to these limitations are proposed.
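
A minimal sketch of the thresholding step described above, assuming per-window event counts and object velocities have already been computed; the threshold values and names are illustrative placeholders, not values from the thesis:

```python
import numpy as np

def detect_slip(obj_events, total_events, v_y, ratio_thresh=0.35, vel_thresh=0.8):
    """Threshold the two 1-D signals from the abstract, per time window.

    obj_events   : events attributed to the object in each window
    total_events : all events in the frame in each window
    v_y          : absolute mean vertical velocity of the object
    The thresholds are made-up placeholders, not the thesis's values.
    """
    ratio = obj_events / np.maximum(total_events, 1)  # avoid division by zero
    return (ratio > ratio_thresh) | (np.abs(v_y) > vel_thresh)

# Toy windows: event activity and downward velocity spike mid-sequence.
obj = np.array([10, 12, 80, 95, 15])
tot = np.array([100, 110, 150, 160, 120])
vy  = np.array([0.1, 0.2, 1.5, 1.8, 0.2])
print(detect_slip(obj, tot, vy))  # -> [False False  True  True False]
```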

Grape cluster and peduncle 3D localization
http://hdl.handle.net/2117/373511 (deposited 2022-09-27)
Torres Rodríguez, Iván Jesús

WILO: Wheel Inertial LiDAR Odometry: Multi-modal state estimation for an autonomous delivery device
http://hdl.handle.net/2117/373510 (deposited 2022-09-27; also 2117/373509)
Fernandez Ruiz, Carlos
The last mile of a product delivery accounts for more than half of its total transportation cost, and the vehicles that currently carry out these tasks are major causes of congestion and pollution in cities. Autonomous delivery devices are an environmentally friendly solution that tackles these issues. The first challenge any autonomous vehicle must overcome is answering two questions: where is the robot, and where is it going? Both are answered by estimating the robot's localization in space from the sensors available on board, and an accurate, fast, and smooth state estimate is a fundamental prerequisite for any operation. This work builds on an existing localization framework for an autonomous delivery device that relied on wheel encoders, GNSS, and an inertial measurement unit (IMU) to estimate the robot's position and orientation. Significant drift from IMU bias and wheel slippage accumulated over a short time, demanding constant and accurate GNSS corrections to avoid collisions; yet urban environments usually suffer from poor satellite coverage, because buildings block the signals, causing inaccurate and slow readings. This work incorporates previously unused LiDAR sensors into the robot's loosely-coupled localization framework: a state-of-the-art LiDAR-inertial state estimation algorithm is integrated alongside the wheel encoders and IMU to improve the state estimate. The localization is evaluated at three real-world sites: a controlled environment on the UPC-Barcelona campus, a LiDAR-degraded parking lot in Esplugues de Llobregat, and an environment representative of last-mile delivery in the neighbourhoods of l'Hospitalet de Llobregat. After travelling 500 m, the final localization error drops from 90 m to less than 30 cm once the LiDAR sensor is incorporated into the state estimation, drastically reducing the need for constant and accurate GNSS readings. A limitation of the chosen LiDAR localization method is that it does not report its estimation error: in a feature-poor environment, no LiDAR-based state estimation method can find a unique solution, so identifying when this degradation occurs is crucial for an accurate state estimate. This work therefore proposes and tests an observability metric, based on the current LiDAR scan, that evaluates the information richness of the measurements by extracting planar features from the scan and assessing the distribution of the plane normal vectors. The metric is tested in a hallway scenario, where the degradation in localization along the hallway direction is correctly identified.
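
One plausible reading of the proposed observability metric, sketched in Python: collect the unit normals of the extracted planes and inspect the eigenvalues of their scatter matrix, where a near-zero eigenvalue marks a translation direction that no plane constrains. The formulation and names below are assumptions for illustration, not the thesis's exact method:

```python
import numpy as np

def observability(normals):
    """Score how well plane normals constrain translation in each direction.

    normals : (n, 3) array of unit normal vectors from planar features.
    Returns the normalized eigenvalue spectrum and eigenvectors of the
    normals' scatter matrix; a near-zero eigenvalue flags a direction
    (e.g. along a hallway) that no observed plane constrains. This is a
    guess at the metric's form, not the thesis's exact formulation.
    """
    s = normals.T @ normals          # 3x3 scatter of normal directions
    w, v = np.linalg.eigh(s)         # eigenvalues in ascending order
    return w / w.sum(), v

# Hallway-like scan: walls constrain y, the floor constrains z, nothing x.
normals = np.array([[0, 1, 0], [0, -1, 0], [0, 0, 1], [0, 1, 0]], float)
spectrum, dirs = observability(normals)
print(spectrum)                      # smallest value ~0 -> x unobservable
```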

Deep Reinforcement Learning in Recommender Systems
http://hdl.handle.net/2117/373455 (deposited 2022-09-23)
Izquierdo Enfedaque, Héctor
Recommender systems aim to help customers find content of interest by presenting them with the suggestions they are most likely to prefer. Reinforcement Learning, a machine learning paradigm in which agents learn by interaction which actions to perform in an environment so as to maximize a reward, can be trained to give good recommendations. One of the problems when working with Reinforcement Learning algorithms is the dimensionality explosion, especially in the observation space, and industrial recommender systems deal with extremely large observation spaces. New Deep Reinforcement Learning algorithms can handle this problem, but they are mainly focused on images; a recently developed technique converts raw data into images, enabling these DRL algorithms to be applied. This project pursues that line of investigation. Its contributions are: (1) defining a generalization of the Markov Decision Process formulation for recommender systems, (2) defining a way to express the observation as an image, and (3) demonstrating both concepts by addressing a particular recommender system case through Reinforcement Learning. Results show that the trained agents offer better recommendations than an arbitrary choice; however, the system does not achieve great performance, mainly due to the lack of interactions in the dataset.
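
A minimal sketch of contribution (2), turning a flat observation vector into a single-channel image that a CNN policy can consume; the min-max normalization, zero-padding, and 8x8 grid here are assumed details, not the encoding actually used in the project:

```python
import numpy as np

def observation_to_image(features, side=8):
    """Pack a flat feature vector into a normalized 2-D grid ("image").

    Sketch of the raw-data-to-image idea from the abstract; the real
    encoding may differ. Features are min-max normalized to [0, 1],
    zero-padded, and reshaped into a side x side single-channel image.
    """
    f = np.asarray(features, dtype=float)
    span = f.max() - f.min()
    f = (f - f.min()) / span if span > 0 else np.zeros_like(f)
    padded = np.zeros(side * side)
    padded[: f.size] = f                 # zero-pad up to a square grid
    return padded.reshape(side, side)

# Example: a 50-dimensional user/item state becomes an 8x8 "image".
state = np.random.rand(50)
print(observation_to_image(state).shape)  # (8, 8)
```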

Grape cluster and peduncle detection from RGB images
http://hdl.handle.net/2117/373036 (deposited 2022-09-20)
Coll Ribes, Gabriel