Reinforcement learning for robot control using probability density estimations
Cite as:
hdl:2117/10368
Document type: Conference paper (text in conference proceedings)
Publication date: 2010
Publisher: INSTICC Press. Institute for Systems and Technologies of Information, Control and Communication
Access conditions: Restricted access by publisher's policy
Except where otherwise indicated, the contents of this work are subject to the Creative Commons license: Attribution-NonCommercial-NoDerivs 3.0 Spain
Abstract
The successful application of Reinforcement Learning (RL) techniques to robot control is limited by the fact that, in most robotic tasks, the state and action spaces are continuous, multidimensional, and, in essence, too large for conventional RL algorithms to work. The well-known curse of dimensionality makes it infeasible to use a tabular representation of the value function, which is the classical approach that provides convergence guarantees. When a function approximation technique is used to generalize among similar states, the convergence of the algorithm is compromised, since updates unavoidably affect an extended region of the domain; that is, some situations are modified in a way that has not actually been experienced, and the update may degrade the approximation. We propose an RL algorithm that uses a probability density estimation in the joint space of states, actions, and Q-values as a means of function approximation. This allows us to devise an updating approach that, by taking the local sampling density into account, avoids excessive modification of the approximation far from the observed sample.
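The core idea of the abstract (representing Q-values through a density estimate over the joint (state, action, Q-value) space, and using the local sampling density to temper updates) can be sketched roughly as follows. This is an illustrative toy only, assuming a simple Gaussian kernel estimate over one-dimensional states and actions; the class name, bandwidth, and all method names are invented for the example and are not the authors' actual algorithm.

```python
import numpy as np

class JointDensityQ:
    """Toy sketch: Q(s, a) as the conditional mean of q under a Gaussian
    kernel density estimate over stored (s, a, q) samples. Illustrative
    assumption only, not the paper's algorithm."""

    def __init__(self, bandwidth=0.3):
        self.h = bandwidth         # kernel bandwidth (assumed value)
        self.samples = []          # list of (s, a, q) triples

    def add(self, s, a, q):
        # Store one observed transition's state, action, and Q-value sample.
        self.samples.append(np.array([s, a, q], dtype=float))

    def _weights(self, s, a):
        # Gaussian kernel weights of each stored sample w.r.t. the query (s, a).
        X = np.array(self.samples)
        d2 = ((X[:, 0] - s) ** 2 + (X[:, 1] - a) ** 2) / self.h ** 2
        return np.exp(-0.5 * d2)

    def q_value(self, s, a):
        # Conditional expectation E[q | s, a] under the kernel estimate.
        X = np.array(self.samples)
        w = self._weights(s, a)
        if w.sum() < 1e-12:        # no nearby samples: no reliable estimate
            return 0.0
        return float(w @ X[:, 2] / w.sum())

    def density(self, s, a):
        # Local sampling density around (s, a); a density-aware update rule
        # would scale its step down where this value is low, so that regions
        # far from observed samples are not modified excessively.
        norm = len(self.samples) * 2.0 * np.pi * self.h ** 2
        return float(self._weights(s, a).sum() / norm)
```

For instance, after adding several samples near (s, a) = (0, 0) with q = 1, `q_value(0.0, 0.0)` returns approximately 1, while `density(5.0, 5.0)` is near zero, signalling that an update there would rest on no real experience.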
Citation: Agostini, A.G.; Celaya, E. Reinforcement learning for robot control using probability density estimations. In: International Conference on Informatics in Control, Automation and Robotics. "7th International Conference on Informatics in Control, Automation and Robotics". Funchal: INSTICC Press. Institute for Systems and Technologies of Information, Control and Communication, 2010, p. 160-168.
Publisher's version: http://www.icinco.org/Abstracts/2010/ICINCO_2010_Abstracts.htm
Files | Description | Size | Format | View
---|---|---|---|---
draccelaya.pdf | | 563,1Kb | | Restricted access