Visual control for fine grasping of deformable objects
Cite as:
hdl:2117/339983
Carried out at: Institut de Robòtica i Informàtica Industrial
Document type: Official master's degree final project
Date: 2021-11-01
Access conditions: Open access
Except where otherwise noted, the contents of this work are subject to the Creative Commons license:
Attribution-NonCommercial-NoDerivs 3.0 Spain
Abstract
This Master Thesis aims to investigate and assess different grasping strategies
to allow a robot to grasp folded textiles. In general, the difficulty of the problem
of grasping textiles is due to the variety of possible fabrics and patterns of the
garments and to the high degree of achievable deformation. The former
implies perceptual complexity; the latter makes it difficult to reconstruct
the 3D model and to predict the behaviour of the objects.
This project focuses on the final approach motion that enables the grasping
of the garment. Compared to the pre-grasp motion (the initial approach of
the gripper to the garment), this is a more challenging phase that requires
greater precision from vision and higher control accuracy. Moreover, the first
part of the movement has already been developed in other projects that rely
on hand-eye calibration and classical kinematics.
In particular, we are interested in providing the robot with a suitable strategy
to grasp only the top layer of a folded garment, which requires a level of
visual precision that the camera on the robot head cannot guarantee. We
therefore embedded an endoscopic camera in the robot hand, obtaining a
mobile camera with a better point of view.
After evaluating different techniques, such as visual servoing, line detection
and visual tracking, an approach based on line detection was developed.
This method is composed of a vision part followed by a control part. The
vision phase exploits simple segmentation techniques like Canny edge detection
and Hough transform in order to speed up the processing of the image and
consequently the entire procedure. The control phase exploits the information
coming from vision to elaborate new control messages sent to the Whole Body
Controller (WBC) of the robot. These messages contain the new position to
which we want to send the arm-tool link in order to approach the garment
carefully.
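The vision-plus-control loop described above can be sketched as follows. This is a minimal illustrative sketch, not the thesis implementation: the line detection step (Canny + Hough on the endoscopic image) is abstracted behind a `detect_line` callback, and all names, gains and scale factors (`PIXELS_PER_METRE`, `MAX_STEP`, `control_step`) are assumptions chosen for the example.

```python
import numpy as np

# Hypothetical sketch of the closed-loop approach: given the grasp line
# detected in the hand-camera image (e.g. via Canny + Hough, not shown),
# compute a small Cartesian correction for the arm-tool link and iterate
# until the line is centred in the image.

PIXELS_PER_METRE = 4000.0   # assumed hand-camera scale at close range
MAX_STEP = 0.005            # cap each correction at 5 mm (careful approach)
TOLERANCE_PX = 2.0          # stop when the line is within 2 px of centre

def line_offset_px(line, image_width):
    """Signed horizontal distance (px) from the line's midpoint to the
    image centre; the controller drives this error to zero."""
    (x1, y1), (x2, y2) = line
    return (x1 + x2) / 2.0 - image_width / 2.0

def control_step(target_pos, offset_px, gain=0.5):
    """Proportional step: convert the pixel error to metres, clamp it,
    and return the new position to send to the whole-body controller."""
    step = np.clip(gain * offset_px / PIXELS_PER_METRE, -MAX_STEP, MAX_STEP)
    new_pos = target_pos.copy()
    new_pos[1] -= step          # correct along the camera's lateral axis
    return new_pos

def approach(target_pos, detect_line, image_width, max_iters=50):
    """Alternate vision and control until the detected line is centred."""
    pos = np.asarray(target_pos, dtype=float)
    for _ in range(max_iters):
        offset = line_offset_px(detect_line(pos), image_width)
        if abs(offset) < TOLERANCE_PX:
            break
        pos = control_step(pos, offset)
    return pos
```

For example, with a simulated `detect_line` whose reported line shifts with the tool position, `approach` converges on the true line position within a few millimetre-sized steps, mimicking the cautious descent toward the top layer.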
Finally, we performed evaluation experiments using a TIAGo mobile manipulation
robot in the Perception and Manipulation laboratory at IRI, a laboratory that
simulates an apartment. Specifically, we showed that, despite the low precision
of the robot's WBC, the designed closed-loop procedure works correctly with
various types of folded garments with an arbitrary number of layers, lying on
different surfaces and under different lighting conditions.
Degree: MOBILITAT INCOMING
| Files | Description | Size | Format | View |
|---|---|---|---|---|
| master-thesis-serenafurlan.pdf | | 76.61 MB | PDF | View/Open |