Multi-capture 3D camera calibration for data fusion in autonomous driving

Document type: Master thesis
Date: 2020-09-09
Rights access: Open Access
All rights reserved. This work is protected by the corresponding intellectual and industrial
property rights. Without prejudice to any existing legal exemptions, reproduction, distribution, public
communication or transformation of this work are prohibited without permission of the copyright holder
Abstract
LiDAR (Light Detection and Ranging) is a device that uses a laser source and a photodetector to determine the distance to any object ahead in a scene. The LiDAR uses time-of-flight (TOF) as its measuring principle. Solid-state LiDAR systems feature fast acquisition rates by scanning the scene to be measured with a single micro-electro-mechanical system (MEMS) mirror, but this scanning introduces a certain amount of distortion in the measurements that cannot be described by the conventional optical model. Thus, the scanning field of view (FOV) must be characterized and corrected so that precise and accurate measurements can be obtained. Such correction is critical for high-end applications, such as autonomous driving, where congruent data fusion with imaging sensors is required. The current procedure is based on a single 3D capture, with the LiDAR constrained to a fixed position and orientation. The aim of this thesis is to improve the procedure by using multiple captures taken with different positions and orientations of the system, adapting the calibration methods used for conventional imagers by means of image-processing techniques.
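The time-of-flight principle mentioned in the abstract can be illustrated with a minimal sketch: the distance to a target is half the laser pulse's round-trip time multiplied by the speed of light. The function name and numeric example below are illustrative, not taken from the thesis.

```python
# Minimal sketch of time-of-flight (TOF) ranging: distance is half the
# round-trip time of the laser pulse multiplied by the speed of light.
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Return the distance in metres to a target, given the pulse
    round-trip time in seconds."""
    return C * round_trip_time_s / 2.0

# Example: a pulse returning after 100 ns corresponds to roughly 15 m.
print(tof_distance(100e-9))
```

The division by two accounts for the pulse travelling to the target and back; real LiDAR ranging additionally compensates for detector latency and, as the abstract notes, for the geometric distortion introduced by the MEMS scanning mirror.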
File: TFM_Antoni_Ramirez.pdf (1,561 Mb, PDF)