Show simple item record

dc.contributor.author: Martínez Martínez, David
dc.contributor.author: Alenyà Ribas, Guillem
dc.contributor.author: Torras, Carme
dc.contributor.other: Institut de Robòtica i Informàtica Industrial
dc.date.accessioned: 2016-04-06T17:46:56Z
dc.date.available: 2016-04-06T17:46:56Z
dc.date.issued: 2015
dc.identifier.citation: Martínez, D., Alenyà, G., Torras, C. Safe robot execution in model-based reinforcement learning. A: IEEE/RSJ International Conference on Intelligent Robots and Systems. "2015 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2015, Hamburg, Germany, September 28-October 2, 2015". Hamburg: Institute of Electrical and Electronics Engineers (IEEE), 2015, p. 6422-6427.
dc.identifier.isbn: 978-1-4799-9994-1
dc.identifier.uri: http://hdl.handle.net/2117/85331
dc.description.abstract: Task learning in robotics requires repeatedly executing the same actions in different states to learn the model of the task. However, in real-world domains, there are usually sequences of actions that, if executed, may produce unrecoverable errors (e.g. breaking an object). Robots should avoid repeating such errors when learning, and thus explore the state space in a more intelligent way. This requires identifying dangerous action effects to avoid including such actions in the generated plans, while at the same time enforcing that the learned models are complete enough for the planner not to fall into dead-ends. We thus propose a new learning method that allows a robot to reason about dead-ends and their causes. Some such causes may be dangerous action effects (i.e., leading to unrecoverable errors if the action were executed in the given state) so that the method allows the robot to skip the exploration of risky actions and guarantees the safety of planned actions. If a plan might lead to a dead-end (e.g., one that includes a dangerous action effect), the robot tries to find an alternative safe plan and, if not found, it actively asks a teacher whether the risky action should be executed. This method permits learning safe policies as well as minimizing unrecoverable errors during the learning process. Experimental validation of the approach is provided in two different scenarios: a robotic task and a simulated problem from the international planning competition. Our approach greatly increases success ratios in problems where previous approaches had high probabilities of failing.
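The decision procedure described in the abstract (prefer a plan free of dangerous action effects; if none exists, actively ask a teacher before executing a risky action) can be sketched roughly as follows. This is a minimal illustration with hypothetical names (`choose_plan`, `ask_teacher`, the plan representation), not the paper's actual implementation:

```python
def choose_plan(candidate_plans, dangerous_effects, ask_teacher):
    """Pick a plan to execute, or return None if the teacher forbids all risky ones.

    candidate_plans: list of plans, each a list of (action, effect) pairs.
    dangerous_effects: set of effects believed to cause unrecoverable errors.
    ask_teacher: callback(action) -> bool, True if execution is permitted.
    """
    # Prefer any plan whose actions have no known dangerous effects.
    safe = [p for p in candidate_plans
            if not any(eff in dangerous_effects for _, eff in p)]
    if safe:
        return safe[0]  # a safe alternative exists: no need to take risks
    # No safe alternative: query the teacher about each plan's risky action.
    for plan in candidate_plans:
        risky_action = next(a for a, eff in plan if eff in dangerous_effects)
        if ask_teacher(risky_action):
            return plan
    return None  # the teacher vetoed every risky plan
```

For example, with candidate plans `[[("grasp", "ok"), ("pour", "spill")], [("grasp", "ok"), ("place", "ok")]]` and `{"spill"}` as the dangerous-effect set, the second (safe) plan is chosen without consulting the teacher.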
dc.format.extent: 6 p.
dc.language.iso: eng
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/3.0/es/
dc.subject: Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial
dc.subject.other: learning (artificial intelligence)
dc.subject.other: manipulators
dc.subject.other: planning (artificial intelligence)
dc.title: Safe robot execution in model-based reinforcement learning
dc.type: Conference report
dc.contributor.group: Universitat Politècnica de Catalunya. ROBiri - Grup de Robòtica de l'IRI
dc.identifier.doi: 10.1109/IROS.2015.7354295
dc.description.peerreviewed: Peer Reviewed
dc.subject.inspec: Classificació INSPEC::Cybernetics::Artificial intelligence::Learning (artificial intelligence)
dc.relation.publisherversion: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7354295
dc.rights.access: Open Access
local.identifier.drac: 17420139
dc.description.version: Postprint (author's final draft)
local.citation.author: Martínez, D.; Alenyà, G.; Torras, C.
local.citation.contributor: IEEE/RSJ International Conference on Intelligent Robots and Systems
local.citation.pubplace: Hamburg
local.citation.publicationName: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2015, Hamburg, Germany, September 28-October 2, 2015
local.citation.startingPage: 6422
local.citation.endingPage: 6427


Files in this item


This item appears in the following collection(s)
