Show simple item record

dc.contributor: Van Wunnik, Lucas Philippe
dc.contributor: Walter, Florian
dc.contributor.author: Salcedo Bosch, Martí
dc.contributor.other: Universitat Politècnica de Catalunya. Departament d'Organització d'Empreses
dc.date.accessioned: 2020-04-27T10:56:34Z
dc.date.available: 2020-04-27T10:56:34Z
dc.date.issued: 2020-07-01
dc.identifier.uri: http://hdl.handle.net/2117/185242
dc.description.abstract: The field of reinforcement learning, developed during the nineteen-eighties and nineties, is a branch of machine learning that has consistently shown wide potential. Using this theory, it is possible to design computer programs able to learn which actions must be taken, in a given environment, to maximise a cumulative reward function. In other words, by rewarding the program, it learns how to behave in order to solve a problem. Originally the field was mainly applied to discrete and finite environments; continuous environments could be handled using traditional function approximators. Recently, with the increase in computational capacity, the field has experienced a revolution: artificial neural networks can now be used as function approximators. This has produced surprising results previously thought unfeasible, and the number of fields where reinforcement learning may be applied has increased drastically. Robotics is one of them, and in the past few years the results achieved have been very promising. In general, and in robotics in particular, one of the topics still to be explored in depth is distributing the learning, that is, parallelising it so that many workers face the problem and share information instead of a single isolated worker. With this, the learning can be optimised, yielding shorter learning times and better knowledge of the environment, among many other advantages. To contribute to this topic, in this project three different distributed architectures, based on state-of-the-art algorithms, will be designed and implemented. The learning will be distributed using many simulated robotic arms that will work in parallel, performing the same task. [A minimal illustrative sketch of this kind of parallel experience collection follows the record below.]
dc.language.iso: eng
dc.publisher: Universitat Politècnica de Catalunya
dc.subject: Àrees temàtiques de la UPC::Informàtica
dc.subject.lcsh: Neural networks (Computer science)
dc.subject.lcsh: Robotics
dc.title: Data integration strategies for distributed reinforcement learning in robotics
dc.title.alternative: Datenintegrationsstrategien für verteiltes verstärkendes Lernen in der Robotik
dc.type: Master thesis
dc.subject.lemac: Xarxes neuronals (Informàtica)
dc.subject.lemac: Robòtica
dc.identifier.slug: ETSEIB-240.136983
dc.rights.access: Open Access
dc.date.updated: 2020-01-20T10:21:09Z
dc.audience.educationlevel: Màster
dc.audience.mediator: Escola Tècnica Superior d'Enginyeria Industrial de Barcelona
dc.audience.degree: MÀSTER UNIVERSITARI EN ENGINYERIA INDUSTRIAL (Pla 2014)
dc.contributor.covenantee: Technische Universität München
dc.description.mobility: Outgoing
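The abstract describes distributing the learning across many simulated robotic arms that work in parallel and share information with a central learner. The Python sketch below illustrates only that general idea under simple assumptions; it is not one of the three architectures designed in the thesis, and the toy dynamics, reward, and names (worker, learner) are illustrative placeholders.

# Minimal sketch, not the thesis' actual architectures: several workers collect
# experience on independent copies of a toy environment and push transitions to
# a shared queue, from which a central learner process could update a policy.
import multiprocessing as mp
import random

def worker(worker_id, episodes, steps, queue):
    """Run episodes in an isolated copy of a placeholder environment."""
    for _ in range(episodes):
        state = 0.0                               # placeholder initial state
        for _ in range(steps):
            action = random.choice([-1.0, 1.0])   # stand-in for a learned policy
            next_state = state + 0.1 * action     # stand-in for robot dynamics
            reward = -abs(next_state)             # stand-in reward signal
            queue.put((worker_id, state, action, reward, next_state))
            state = next_state

def learner(queue, total_transitions):
    """Consume transitions from all workers; a real learner would update the policy here."""
    seen = 0
    while seen < total_transitions:
        queue.get()
        seen += 1
    print(f"learner processed {seen} transitions from parallel workers")

if __name__ == "__main__":
    n_workers, episodes, steps = 3, 2, 50
    q = mp.Queue()
    procs = [mp.Process(target=worker, args=(i, episodes, steps, q)) for i in range(n_workers)]
    for p in procs:
        p.start()
    learner(q, n_workers * episodes * steps)
    for p in procs:
        p.join()

In a real distributed setup the learner would update a neural-network policy from the incoming transitions and periodically send updated parameters back to the workers; how that information is integrated is precisely what the thesis' three architectures vary.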

