Implementing SLAM and navigation on an autonomous mobile robot for load transportation for disabled and elderly people.
Tutor / director / evaluator: Bolea Monte, Yolanda
Document type: Master thesis
Rights access: Restricted access - author's decision
This thesis fell within a project by the Borobo company, located in Nice, France, to build a low-cost autonomous mobile robot for load transportation for elderly people and people with limited mobility. The objective of this thesis was to implement Simultaneous Localization and Mapping (SLAM) and autonomous navigation on a four-wheeled mobile robot using low-cost sensors. The method was first to select low-cost sensors to serve as odometry sources and obstacle detection devices, and to describe their implementation on the robot. Then the SLAM algorithm and autonomous navigation were implemented using the Robot Operating System (ROS) and packages provided by the ROS community. Finally, the performance of SLAM and autonomous navigation was assessed for different configurations of sensors and parameters, and the advantages and disadvantages of each sensor and navigation method were listed.

Regarding the odometry sources, wheel odometry from rotary encoders, visual odometry from the Intel RealSense Depth camera D430, and odometry from the Intel RealSense Tracking camera T265 were compared. The wheel odometry of this four-wheeled differential-drive robot showed significant error due to wheel slippage on the ground during rotational movements. This error was limited for the visual odometry from the D430 camera, but the fragility of this sensor made it unusable. The most precise odometry source tested was the T265 tracking camera, which, in addition to providing precise odometry thanks to its embedded Inertial Measurement Unit (IMU) and visual-SLAM processing, relieved the robot's main processing unit of visual computing.

For obstacle detection, the RPLidar A1 lidar sensor and the Intel RealSense Depth camera D430 were assessed. On the one hand, the lidar sensor detected obstacles in only two dimensions, but with great precision and a 120° field of view.
On the other hand, the depth camera could detect obstacles in three dimensions, but with great imprecision and a limited field of view of only 85.2°. The SLAM algorithm executed with the lidar was precise and showed good loop closure detection, whereas with the depth camera the depth data were too noisy to be exploitable.

After completing these comparisons, it was concluded that the best sensor configuration for the prototype of the Borobo robot was to use the Intel RealSense Tracking camera T265 as the odometry source and the RPLidar A1 lidar as the laser scan source. However, further development was required on wheel odometry, using an IMU to correct the rotational error. Further tests were also advised on the Intel RealSense Depth camera D435, a less fragile version of the D430 camera, as an odometry source but especially for obstacle detection, as it allows low-cost obstacle detection in three dimensions. Finally, some improvements and parameter tuning were recommended for the navigation algorithm.
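To illustrate why wheel slippage corrupts the rotational component of wheel odometry on a differential-drive robot, the sketch below integrates one odometry step from per-wheel encoder distances. This is a generic textbook formulation, not code from the thesis; all names and numeric values (e.g. the 0.4 m wheel base) are illustrative assumptions.

```python
import math

def update_odometry(x, y, theta, d_left, d_right, wheel_base):
    """Integrate one differential-drive odometry step.

    d_left, d_right: distance travelled by each drive wheel (m), from encoders.
    wheel_base: distance between the two drive wheels (m).
    Returns the updated pose (x, y, theta).
    """
    d_center = (d_left + d_right) / 2.0        # forward displacement of the chassis
    d_theta = (d_right - d_left) / wheel_base  # heading change

    # During an in-place rotation of a four-wheeled chassis the wheels also
    # slip sideways, so the encoder distances over-estimate the true arc
    # length and the error accumulates almost entirely in d_theta -- the
    # rotational error the thesis observed and proposed to correct with an IMU.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta + math.pi) % (2.0 * math.pi) - math.pi
    return x, y, theta

# Pure rotation: wheels move equal and opposite distances, position is unchanged.
pose = update_odometry(0.0, 0.0, 0.0, d_left=-0.1, d_right=0.1, wheel_base=0.4)
```

In practice the real heading change during such a manoeuvre is smaller than the integrated `d_theta`, which is why fusing an IMU yaw rate was recommended.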
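As a minimal sketch of how a 2-D laser scan feeds a SLAM map, the snippet below marks scan endpoints as occupied cells in a sparse occupancy grid, given a known robot pose. It only shows the map-update half of the problem; the ROS SLAM packages used in the thesis additionally estimate the pose and perform loop closure. All function names and parameters here are illustrative assumptions, not the thesis's implementation.

```python
import math

def mark_scan(grid, robot_x, robot_y, robot_theta, ranges,
              angle_min, angle_step, resolution):
    """Mark laser-scan endpoints as occupied cells in a sparse 2-D grid.

    grid: dict mapping (ix, iy) cell indices to hit counts.
    ranges: measured distances (m); math.inf means no return for that beam.
    angle_min, angle_step: beam angles relative to the robot heading (rad).
    resolution: cell size (m).
    """
    for i, r in enumerate(ranges):
        if math.isinf(r):
            continue  # beam hit nothing within range
        angle = robot_theta + angle_min + i * angle_step
        hit_x = robot_x + r * math.cos(angle)
        hit_y = robot_y + r * math.sin(angle)
        cell = (int(hit_x // resolution), int(hit_y // resolution))
        grid[cell] = grid.get(cell, 0) + 1
    return grid

# Two beams at -0.5 rad and +0.5 rad, both returning a 1 m range.
grid = mark_scan({}, 0.0, 0.0, 0.0, [1.0, 1.0], -0.5, 1.0, 0.1)
```

The precision of the resulting map depends directly on the range noise of the scan source, which matches the thesis's finding that the RPLidar A1 produced exploitable maps while the noisier D430 depth data did not.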