Multi-capture 3D camera calibration for data fusion in autonomous driving
Cite as:
hdl:2117/342874
Document type: Official master's degree final project
Date: 2020-09-09
Access conditions: Open access
All rights reserved. This work is protected by the corresponding intellectual and industrial property rights. Notwithstanding any applicable legal exemptions, its reproduction, distribution, public communication or transformation without the authorization of the rights holder is prohibited.
Abstract
LiDAR (Light Detection and Ranging) is a device that uses a laser source and a photodetector to determine the distance to any object in a scene, based on the time-of-flight (TOF) measuring principle. Solid-state LiDAR systems achieve fast acquisition rates by scanning the scene with a single micro-electro-mechanical system (MEMS) mirror, but this scanning introduces a distortion in the measurements that cannot be described by the conventional optical model. The scanning field of view (FOV) must therefore be characterized and corrected so that precise and accurate measurements can be obtained. Such correction is critical for high-end applications, such as autonomous driving, where congruent data fusion with imaging sensors is required. The current calibration procedure is based on a single 3D capture, with the LiDAR constrained to a fixed position and orientation. The aim of this thesis is to improve the procedure by using multiple captures with different positions and orientations of the system, adapting the calibration methods used for conventional imagers by means of image processing techniques.
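As a minimal sketch of the time-of-flight principle mentioned in the abstract (the textbook range equation, not code from the thesis): a laser pulse travels to the target and back, so the range is half the round-trip path covered at the speed of light.

```python
# Time-of-flight ranging: distance = (speed of light * round-trip delay) / 2.
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Return the target distance in metres for a measured round-trip delay."""
    return C * round_trip_time_s / 2.0

# Example: a round-trip delay of 667 ns corresponds to roughly 100 m.
print(tof_distance(667e-9))  # ~99.98 m
```

This idealized formula ignores the scan-mirror geometry; characterizing and correcting the distortion that the MEMS scanning adds on top of it is precisely the subject of the thesis.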
Collections
Files | Description | Size | Format | View |
---|---|---|---|---|
TFM_Antoni_Ramirez.pdf | | 1,561 MB | PDF | View/Open |