Data integration strategies for distributed reinforcement learning in robotics
Tutor / director / evaluator: Van Wunnik, Lucas Philippe
Covenantee: Technische Universität München
Document type: Master thesis
Rights access: Open Access
Reinforcement learning, developed during the 1980s and 1990s, is a branch of machine learning that has consistently shown broad potential. With it, one can design computer programs that learn which actions to take, in a given environment, to maximise a cumulative reward function; in other words, by rewarding the program, it learns how to behave in order to solve a problem. Originally the field was applied mainly to discrete and finite environments, although continuous environments could be handled using traditional function approximators. Recently the field has undergone a revolution: the increase in computational capacity has enabled the use of artificial neural networks as function approximators. This has produced surprising results previously thought infeasible, and the number of fields where reinforcement learning may be applied has grown drastically. Robotics is one of them, and in the past few years the results achieved have been very promising.

In robotics, as in general, one topic still to be explored in depth is the distribution of the learning, that is, parallelising it: having many workers facing the problem and sharing information, instead of a single isolated worker. With it, the learning can be optimised, yielding shorter learning times and better knowledge of the environment, among other advantages. To contribute to this topic, in this project three different distributed architectures, based on state-of-the-art algorithms, will be designed and implemented. The learning will be distributed across many simulated robotic arms that work in parallel performing the same task.
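The idea of many workers sharing information while maximising a cumulative reward can be sketched in miniature. The toy below is purely illustrative and is not the thesis implementation: several "workers" learn the same two-armed bandit in parallel by updating one shared action-value table with an epsilon-greedy rule, so that experience gathered by any worker benefits all of them. All names, constants, and the bandit task itself are assumptions for the sketch.

```python
import random

# Illustrative toy only (not the thesis architecture): parallel workers
# sharing a single value table while learning a 2-armed bandit.

ARM_MEANS = [0.2, 0.8]   # true expected reward of each action (assumed)
ALPHA = 0.1              # learning rate
EPSILON = 0.1            # exploration probability

def pull(arm, rng):
    """Stochastic reward: 1 with probability ARM_MEANS[arm], else 0."""
    return 1.0 if rng.random() < ARM_MEANS[arm] else 0.0

def worker_step(q, rng):
    """One epsilon-greedy action and one update of the shared table q."""
    if rng.random() < EPSILON:
        arm = rng.randrange(len(q))                    # explore
    else:
        arm = max(range(len(q)), key=lambda a: q[a])   # exploit
    reward = pull(arm, rng)
    q[arm] += ALPHA * (reward - q[arm])                # incremental update

def train(n_workers=4, steps=2000, seed=0):
    """Run all workers against one shared value table."""
    rng = random.Random(seed)
    q = [0.0] * len(ARM_MEANS)       # value table shared by every worker
    for _ in range(steps):
        for _ in range(n_workers):   # each worker acts and updates in turn
            worker_step(q, rng)
    return q
```

Because every update lands in the same shared table, each worker's exploration shortens the learning of the others, which is the advantage the distributed architectures in this project aim to exploit at much larger scale.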