Learning to safely drive using Reinforcement Learning

Document type: Master thesis
Date: 2021-04-28
Rights access: Open Access
All rights reserved. This work is protected by the corresponding intellectual and industrial property rights. Without prejudice to any existing legal exemptions, reproduction, distribution, public communication or transformation of this work are prohibited without permission of the copyright holder.
Abstract
The autonomous driving research area has gained popularity over the past decade, even more so since the launch of the first autonomous vehicle from Tesla, Inc. Several research branches are currently being explored, and one of the most innovative is based on Reinforcement Learning. However, Reinforcement Learning models do not guarantee safe decisions because their decision-making process is opaque, which makes them hard to apply in the real world: confidence in the safety of the system is what allows autonomous driving to move from theory to practice. The aim of this project is to define a Reinforcement Learning model that ensures safety and provides control over, and awareness of, the decision-making process in potentially unsafe situations; the model is trained and evaluated in the CARLA driving simulator. The architecture is composed of a Variational Autoencoder, which reduces the dimensionality of the input images provided by the simulator; a Mixture Density Recurrent Neural Network, which forecasts the most probable future state; and a Soft Actor-Critic, which predicts the next action of the car agent based on past experience. In addition, a safety mask is applied to modify the actor's policy in dangerous situations. This mask ensures supervised behavior in such situations, providing Reinforcement-Learning-based autonomous driving systems with the safety guarantees they lacked for real-world deployment. We also analyze whether the agent is able to learn the safety constraints imposed by the mask, and therefore learn to drive safely.

The main contributions of this project start with demonstrating the effectiveness of the Soft Actor-Critic Reinforcement Learning algorithm in an autonomous driving task, which had not been done before. Additionally, several reward functions are defined that outperform the current state of the art. Moreover, this thesis provides an exhaustive analysis of the relevance of forecasting in a self-driving task. To conclude, this thesis shows that using safety masks in Reinforcement-Learning-based autonomous driving systems is, to the best of our knowledge, the best option for avoiding the uncertain actions of Reinforcement Learning agents in unsafe situations. This could be a first step toward promoting the application of Reinforcement Learning in the real world, since it ensures safe behavior.
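As an illustration of the pipeline described above, the sketch below shows how a single control step could be wired together: a VAE encoder compresses a camera frame into a latent vector, an MDN-RNN carries the recurrent state used for forecasting, a Soft Actor-Critic actor samples steering and throttle, and a safety mask overrides the action when a danger flag is raised. All class names, network sizes, and the braking rule are illustrative assumptions, not the implementation used in the thesis.

```python
# Hypothetical sketch of the VAE + MDN-RNN + SAC + safety-mask pipeline.
# Dimensions and the masking rule are assumptions for illustration only.
import torch
import torch.nn as nn

LATENT, HIDDEN, ACTIONS = 32, 256, 2   # assumed: z-dim, RNN state size, (steer, throttle)

class Encoder(nn.Module):
    """VAE encoder: compresses a camera frame into a latent vector z."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Flatten())
        self.mu, self.logvar = nn.LazyLinear(LATENT), nn.LazyLinear(LATENT)

    def forward(self, frame):
        h = self.conv(frame)
        mu, logvar = self.mu(h), self.logvar(h)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterised sample

class MDNRNN(nn.Module):
    """Recurrent world model: predicts a mixture over the next latent state."""
    def __init__(self, mixtures=5):
        super().__init__()
        self.rnn = nn.LSTMCell(LATENT + ACTIONS, HIDDEN)
        self.mdn = nn.Linear(HIDDEN, mixtures * (2 * LATENT + 1))  # means, log-stds, logits

    def forward(self, z, action, state):
        h, c = self.rnn(torch.cat([z, action], dim=-1), state)
        return self.mdn(h), (h, c)

class Actor(nn.Module):
    """SAC actor: squashed Gaussian policy over steering and throttle."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT + HIDDEN, 256), nn.ReLU(),
                                 nn.Linear(256, 2 * ACTIONS))

    def forward(self, z, h):
        mu, log_std = self.net(torch.cat([z, h], dim=-1)).chunk(2, dim=-1)
        a = mu + torch.randn_like(mu) * log_std.clamp(-5, 2).exp()
        return torch.tanh(a)                                       # actions in [-1, 1]

def safety_mask(action, danger):
    """Override the policy in unsafe states, e.g. cut the throttle (illustrative rule)."""
    if danger:
        action = action.clone()
        action[..., 1] = -1.0
    return action

# One control step: frame -> z -> (forecast, h) -> action -> masked action
enc, wm, actor = Encoder(), MDNRNN(), Actor()
with torch.no_grad():
    frame = torch.rand(1, 3, 96, 96)                               # placeholder camera frame
    z = enc(frame)
    state = (torch.zeros(1, HIDDEN), torch.zeros(1, HIDDEN))
    _, (h, c) = wm(z, torch.zeros(1, ACTIONS), state)
    action = safety_mask(actor(z, h), danger=False)
```

In this sketch the MDN-RNN output is unused at decision time; it stands in for the forecasting component whose relevance the thesis analyzes, while the hidden state it maintains is fed to the actor alongside the current latent.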
Subjects: Reinforcement learning, Automobile driving simulators
Degree: MÀSTER UNIVERSITARI EN INTEL·LIGÈNCIA ARTIFICIAL (Pla 2017)
Files | Size | Format
---|---|---
155960.pdf | 20.66 MB | PDF