Multi camera people tracking system in operation rooms
Cite as: hdl:2117/368615
Document type: Official master's thesis
Date: 2022-05-26
Access conditions: Open access
Unless otherwise indicated, the contents of this work are subject to the Creative Commons license: Attribution-NonCommercial-NoDerivs 3.0 Spain
Abstract
This project focuses on improving a Multi Camera People Tracking System that responds to the need to track doctors in an operating room, so that their coordinates can be fed to a radiation model to compute the dose they are receiving. Some surgical procedures, such as X-ray imaging, require the use of radiation. The radiation exposure of the patient is defined and controlled by the machine specifications. The clinical staff, however, receives scattered radiation while performing such operations every day, so there is a need to know how exposed they are. The PODIUM project (Personal Online DosImetry Using computational Methods) was created to open a new way of monitoring radiation without the use of dosimeters, and the Multi Camera People Tracking System is the part responsible for capturing the position of the people in the room. This thesis aims to improve the multi-camera system by increasing the recording frame rate, adding flexibility and robustness to the calibration process between cameras, introducing a new pattern format for the world calibration, and presenting a new graphical option for plotting the bodies. The first objective was accomplished by reducing the time delay in the client-server communication. Some problems were encountered with string printing and data processing when working at higher frequencies, so some of these operations were deactivated or reduced, and the code was cleaned and better encapsulated. Furthermore, the study showed that the computer specifications were not the determining factor, but the number of Kinect cameras was. The second objective consisted in changing the calibration method from capturing five frames to using a video. The challenges faced were related to filtering and to proving that the resulting calibration was correct. To do so, some filters were added and a new test procedure was created to ensure the calibration quality.
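The calibration between cameras described above amounts to estimating a rigid transform between corresponding 3-D points seen by two Kinects, and scoring it by a mean residual error. A minimal sketch of that idea (this is not the thesis code; the Kabsch algorithm via SVD and the function names are illustrative assumptions):

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Estimate rotation R and translation t such that dst ~ R @ src + t,
    from corresponding 3-D points (rows), using the Kabsch algorithm."""
    src_c = src - src.mean(axis=0)          # center both point clouds
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def mean_error(src, dst, R, t):
    """Mean residual of the fit -- the 'error mean' quality test."""
    return np.mean(np.linalg.norm((R @ src.T).T + t - dst, axis=1))
```

In a video-based calibration, outlier frames would be filtered first and the mean residual evaluated for different numbers of frames, as the abstract describes.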
The validation of the new method was studied by taking the mean error of the test for different numbers of frames and by comparing the resulting transformation matrices with ones calculated using a different method. The third objective, adding a new pattern, was successfully accomplished for ArUco. This new format for world calibration allowed working at longer distances, since a 3:1 size relation between QR and ArUco patterns for the same detection distance was demonstrated. The fourth and last objective was introducing a new online display for the fused information coming from the Kinects. Although the new graphical interface was built and worked, it could not be implemented online. The main difficulty was that the programming languages involved were different (C# and Python), so a new socket would have been required, which was out of the scope of the project. This study added utility and reliability to the original project and opened a window to further improvements, especially in the graphical interface, but also for other works that might use multi-Kinect systems and 3-D positioning.
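One validation step above compares transformation matrices computed by two different methods. A hedged sketch of such a comparison, assuming 4x4 homogeneous matrices (the function name is illustrative): the relative rotation angle follows from the trace of the relative rotation, and the translation offset is a simple Euclidean distance.

```python
import numpy as np

def transform_difference(T_a, T_b):
    """Compare two 4x4 homogeneous transforms: returns the relative
    rotation angle (radians) and the translation distance between them."""
    R_rel = T_a[:3, :3].T @ T_b[:3, :3]
    # angle of the relative rotation, recovered from its trace
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    angle = np.arccos(cos_angle)
    t_dist = np.linalg.norm(T_a[:3, 3] - T_b[:3, 3])
    return angle, t_dist
```

Small angle and distance values indicate that the two calibration methods agree.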
Files | Description | Size | Format | View
---|---|---|---|---
tfm-lbm.pdf | | 57.35 MB | | View/Open