Counter a drone and the performance analysis of deep reinforcement learning method and human pilot
10.1109/DASC52595.2021.9594413
Cite as: hdl:2117/359262
Document type: Conference paper
Publication date: 2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Access conditions: Open access
Unless otherwise indicated, the contents of this work are subject to the Creative Commons license: Attribution-NonCommercial-NoDerivs 3.0 Spain
Projects:
- Engage - The SESAR Knowledge Transfer Network (EC-H2020-783287)
- CORUS-XUAM - CONCEPT OF OPERATIONS FOR EUROPEAN U-SPACE SERVICES (EC-H2020-101017682)
- SUPERVISION DE FLOTA DE DRONES Y OPTIMIZACION DE LOS PLANES DE VUELO DE OPERACIONES COMERCIALES (AEI-PID2020-116377RB-C21)
Abstract
Artificial Intelligence (AI) has been used in different research areas of aerospace to create intelligent systems. In particular, an unmanned aerial vehicle (UAV), commonly known as a drone, can be controlled by AI methods such as deep reinforcement learning (DRL) for different purposes. Drones with DRL become more intelligent and can eventually become fully autonomous. In this paper, a DRL method supported by a real-time object detection model is proposed to detect and catch a drone. Additionally, the results are analyzed by comparing the time to catch the target drone, in seconds, between the DRL method, a human pilot, and an algorithm that directs the drone towards the target position without using any AI, navigation, or guidance method. The main idea is to catch a drone in an environment as fast as possible without crashing into any obstacles inside the environment. In the DRL method, the agent is a quadcopter drone, and it is rewarded at each time step by the environment provided by the AirSim flight simulator. The drone is trained to catch the target drone using a DRL model based on the deep Q-Network (DQN) algorithm. After training, tests were carried out with the agent drone running the DRL model and with human pilots, catching both stationary and non-stationary target drones. The training and test results show that the agent drone learns to catch the target drone, whether stationary or non-stationary. In addition, the agent avoids crashing into any obstacles in the environment with a minimum success rate of 94%. The DRL model's performance is also compared with the human pilots' performances, and the agent with the DRL model achieves a better time to catch the target drone. Human pilots struggle to control the drone with a remote controller when catching the target in simulation, whereas the agent with the DRL model rarely misses the target.
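The DQN training loop summarized in the abstract can be illustrated with a minimal sketch. The action set, state features, discount factor, learning rate, and reward below are hypothetical placeholders rather than the paper's actual AirSim configuration, and a linear function stands in for the deep network; the only element taken from the abstract is the Q-learning target itself.

```python
# Minimal sketch of the deep Q-Network (DQN) update the paper's agent uses.
# All constants and the action set are illustrative assumptions.

GAMMA = 0.99      # discount factor (assumed value)
ALPHA = 0.01      # learning rate (assumed value)
ACTIONS = ["forward", "left", "right", "up"]  # hypothetical action set

# A linear Q-function stands in for the deep network: Q(s, a) = w[a] . s
weights = {a: [0.0, 0.0, 0.0] for a in ACTIONS}

def q_value(state, action):
    return sum(w * x for w, x in zip(weights[action], state))

def dqn_update(state, action, reward, next_state, done):
    """One temporal-difference step toward the DQN target
    y = r + gamma * max_a' Q(s', a')."""
    target = reward
    if not done:
        target += GAMMA * max(q_value(next_state, a) for a in ACTIONS)
    error = target - q_value(state, action)
    # gradient step for the linear approximator
    weights[action] = [w + ALPHA * error * x
                       for w, x in zip(weights[action], state)]
    return error

# Toy transition: the agent moves closer to the target and is rewarded.
td_error = dqn_update([1.0, 0.5, 0.0], "forward", 1.0, [0.9, 0.4, 0.0], False)
print(round(td_error, 3))  # prints 1.0 (all weights start at zero)
```

In the full method, the linear approximator would be replaced by a deep network with experience replay and a periodically synced target network, as is standard for DQN.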
Citation: Cetin, E.; Barrado, C.; Pastor, E. Counter a drone and the performance analysis of deep reinforcement learning method and human pilot. In: IEEE/AIAA Digital Avionics Systems Conference. "40th DASC: Digital Avionics Systems Conference, San Antonio, TX, USA: 3-7 October, 2021: conference proceedings". Institute of Electrical and Electronics Engineers (IEEE), 2021, p. 1-7. DOI 10.1109/DASC52595.2021.9594413.
Publisher's version: https://ieeexplore.ieee.org/document/9594413
Collections
- Doctorat en Ciència i Tecnologia Aeroespacials - Conference papers [71]
- ICARUS - Intelligent Communications and Avionics for Robust Unmanned Aerial Systems - Conference papers [171]
- Departament d'Arquitectura de Computadors - Conference papers [1,954]
Files | Description | Size | Format | View
---|---|---|---|---
Counter_a_Drone ... Method_and_Human_Pilot.pdf | | 3,400Mb | | View/Open