Show simple item record

dc.contributor	Salamí San Juan, Esther
dc.contributor	Valero García, Miguel
dc.contributor.author	Marcos Paya, Pol
dc.contributor.other	Universitat Politècnica de Catalunya. Arquitectura de Computadors
dc.date.accessioned	2024-09-17T09:32:56Z
dc.date.available	2024-09-17T09:32:56Z
dc.date.issued	2024-09-12
dc.identifier.uri	http://hdl.handle.net/2117/414365
dc.description.abstract	The main objective of this final degree project is to develop software that allows a drone to identify and classify objects in real time through the use of a camera. To achieve this, advanced computer vision techniques and artificial intelligence algorithms have been used, ensuring the correct integration between the developed software and the drone control system. A specific use case is to guide the movement of the drone following a route marked by objects strategically located on the ground.

The methodology used to meet the objectives set out focused on the implementation of an object detection module based on the YOLO (You Only Look Once) algorithm, a convolutional neural network optimised for real-time object detection. The module was developed in Python, and its integration into the Drone Engineering Ecosystem (DEE), a drone control platform, enabled the identification of objects and subsequent decision-making by the drone. During the development process, different YOLOv8 models (v8n, v8s, v8m, v8l, v8x) were selected and evaluated, and then retrained using a proprietary dataset that included classes such as banana, ball, box and backpack.

Several tests were performed, both in simulated environments and in a laboratory with a real drone, to measure the accuracy and efficiency of the system. The results were satisfactory, achieving an improvement in object detection compared to pre-trained models, with accuracy increases of up to 53% in some cases. Despite these achievements, the project had limitations, such as the impossibility of implementing object detection on the drone's Raspberry Pi due to technical problems with the library used, which restricted image processing to the ground equipment. In addition, the resolution of the drone's camera was not optimal for detecting small objects, and some false positives were observed that occasionally diverted the drone from its route.

In conclusion, the project demonstrated the effectiveness of integrating an advanced object detection system into the DEE, opening the door to future improvements in model accuracy and drone functionality. Future lines of development are suggested to optimise the system, such as the reduction of false positives and the integration of processing on the Raspberry Pi.
dc.language.iso	spa
dc.publisher	Universitat Politècnica de Catalunya
dc.rights.uri	http://creativecommons.org/licenses/by/3.0/es/
dc.subject	Àrees temàtiques de la UPC::Aeronàutica i espai
dc.subject.lcsh	Drone aircraft
dc.subject.other	Reconocimiento de objetos
dc.subject.other	Dron
dc.subject.other	Video streaming
dc.title	Reconocimiento de objetos para el control de drones en el Drone Engineering Ecosystem
dc.type	Bachelor thesis
dc.subject.lemac	Avions no tripulats
dc.identifier.slug	PRISMA-188528
dc.rights.access	Open Access
dc.date.updated	2024-09-17T03:35:08Z
dc.audience.educationlevel	Estudis de primer/segon cicle
dc.audience.mediator	Escola d'Enginyeria de Telecomunicació i Aeroespacial de Castelldefels
dc.audience.degree	GRAU EN ENGINYERIA DE SISTEMES DE TELECOMUNICACIÓ (Pla 2009)
dc.description.sdg	Objectius de Desenvolupament Sostenible::9 - Indústria, Innovació i Infraestructura
dc.description.sdg	Objectius de Desenvolupament Sostenible::4 - Educació de Qualitat
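The abstract describes a Python module in which YOLOv8 detections drive the drone's decision-making, guiding it along a route of ground markers (banana, ball, box, backpack) while filtering out false positives. The thesis code itself is not reproduced in this record; the following is a minimal, hypothetical sketch of what such a decision step could look like, assuming detections arrive as (class name, confidence, bounding box) tuples from the detector. All names, the tuple layout and the thresholds are illustrative assumptions.

```python
# Hypothetical decision step: from YOLO-style detections, pick the next
# route marker the drone should head toward. Not the thesis's actual code.

ROUTE_CLASSES = {"banana", "ball", "box", "backpack"}  # classes in the custom dataset

def next_marker(detections, frame_w, frame_h, min_conf=0.5):
    """Return (class_name, (cx, cy)) of the route marker nearest the frame
    centre, or None if no valid marker is visible.

    detections: iterable of (class_name, confidence, (x1, y1, x2, y2)).
    The confidence threshold discards low-score boxes, one simple way to
    reduce the false positives the abstract notes can divert the drone.
    """
    frame_cx, frame_cy = frame_w / 2, frame_h / 2
    best, best_d2 = None, float("inf")
    for name, conf, (x1, y1, x2, y2) in detections:
        if name not in ROUTE_CLASSES or conf < min_conf:
            continue  # ignore non-route classes and uncertain detections
        mx, my = (x1 + x2) / 2, (y1 + y2) / 2          # box centre
        d2 = (mx - frame_cx) ** 2 + (my - frame_cy) ** 2  # squared distance
        if d2 < best_d2:
            best, best_d2 = (name, (mx, my)), d2
    return best
```

In use, the returned box centre would be converted into a velocity or heading command for the flight controller; for example, `next_marker(dets, 640, 480)` on a 640x480 frame yields the closest qualifying marker, which the control loop can then steer toward.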

