Simple item record

dc.contributor.author: Colomé Figueras, Adrià
dc.contributor.author: Planells Valencia, Antoni
dc.contributor.author: Torras, Carme
dc.contributor.other: Institut de Robòtica i Informàtica Industrial
dc.date.accessioned: 2016-02-26T19:05:23Z
dc.date.available: 2016-02-26T19:05:23Z
dc.date.issued: 2015
dc.identifier.citation: Colomé, A., Planells, A., Torras, C. A friction-model-based framework for reinforcement learning of robotic tasks in non-rigid environments. In: IEEE International Conference on Robotics and Automation. "2015 IEEE International Conference on Robotics and Automation (ICRA 2015): Seattle, Washington, USA, 26-30 May 2015". Seattle, WA: Institute of Electrical and Electronics Engineers (IEEE), 2015, p. 5649-5654.
dc.identifier.isbn: 978-1-4799-6924-1
dc.identifier.uri: http://hdl.handle.net/2117/83513
dc.description.abstract: Learning motion tasks in a real environment with deformable objects requires not only a Reinforcement Learning (RL) algorithm, but also a good motion characterization, a preferably compliant robot controller, and an agent giving feedback for the rewards/costs in the RL algorithm. In this paper, we unify all these parts in a simple but effective way to properly learn safety-critical robotic tasks such as wrapping a scarf around the neck (so far, of a mannequin). We found that a suitable compliant controller ought to have a good Inverse Dynamic Model (IDM) of the robot. However, most approaches to building such a model do not consider the possibility of hysteresis in the friction, which is the case for robots such as the Barrett WAM. For this reason, in order to improve the available IDM, we derived an analytical model of friction in the seven robot joints, whose parameters can be automatically tuned for each particular robot. This permits compliantly tracking diverse trajectories in the whole workspace. By using such a friction-aware controller, Dynamic Movement Primitives (DMP) as motion characterization, and visual/force feedback within the RL algorithm, experimental results demonstrate that the robot is consistently capable of learning such safety-critical tasks.
dc.format.extent: 6 p.
dc.language.iso: eng
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.subject: Àrees temàtiques de la UPC::Informàtica::Robòtica
dc.subject.other: Intelligent robots
dc.subject.other: learning (artificial intelligence)
dc.subject.other: manipulators
dc.subject.other: robot dynamics
dc.subject.other: reinforcement learning
dc.subject.other: dynamic models
dc.title: A friction-model-based framework for reinforcement learning of robotic tasks in non-rigid environments
dc.type: Conference report
dc.contributor.group: Universitat Politècnica de Catalunya. ROBiri - Grup de Robòtica de l'IRI
dc.identifier.doi: 10.1109/ICRA.2015.7139990
dc.description.peerreviewed: Peer Reviewed
dc.subject.inspec: Classificació INSPEC::Automation::Robots::Intelligent robots
dc.relation.publisherversion: http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7139990
dc.rights.access: Open Access
local.identifier.drac: 17087076
dc.description.version: Postprint (author's final draft)
local.citation.author: Colomé, A.; Planells, A.; Torras, C.
local.citation.contributor: IEEE International Conference on Robotics and Automation
local.citation.pubplace: Seattle, WA
local.citation.publicationName: 2015 IEEE International Conference on Robotics and Automation (ICRA 2015): Seattle, Washington, USA, 26-30 May 2015
local.citation.startingPage: 5649
local.citation.endingPage: 5654
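
The abstract above describes adding a tunable per-joint friction term to the robot's Inverse Dynamic Model so the controller can track trajectories compliantly. As a rough illustration only, the sketch below shows a generic Coulomb-plus-viscous friction feedforward with a simple hysteresis offset and a least-squares parameter fit; the model form, the function names, and the tuning routine are assumptions for illustration, not the analytical model derived in the paper.

```python
import numpy as np

# Hypothetical per-joint friction term: Coulomb + viscous + a hysteresis-like
# offset based on the last motion direction. Illustrative sketch only; this is
# NOT the analytical friction model of the paper.
def friction_torque(qdot, prev_sign, fc, fv, fh, eps=1e-3):
    """qdot: joint velocities (7,); prev_sign: last motion direction (7,);
    fc, fv, fh: Coulomb, viscous and hysteresis coefficients (7,)."""
    sign = np.where(np.abs(qdot) > eps, np.sign(qdot), prev_sign)
    tau_f = fc * sign + fv * qdot + fh * prev_sign
    return tau_f, sign

def compensated_torque(tau_idm, qdot, prev_sign, params):
    """Add the friction feedforward to the inverse-dynamics torque tau_idm."""
    tau_f, sign = friction_torque(qdot, prev_sign, *params)
    return tau_idm + tau_f, sign

# Illustrative parameter identification: per-joint least-squares fit of
# (fc, fv, fh) from logged residual torques tau_res = tau_measured - tau_idm.
def tune_friction_params(qdot_log, sign_log, tau_res_log):
    n_joints = qdot_log.shape[1]
    fc, fv, fh = (np.zeros(n_joints) for _ in range(3))
    for j in range(n_joints):
        A = np.stack([np.sign(qdot_log[:, j]),
                      qdot_log[:, j],
                      sign_log[:, j]], axis=1)
        fc[j], fv[j], fh[j] = np.linalg.lstsq(A, tau_res_log[:, j], rcond=None)[0]
    return fc, fv, fh
```

In the paper this kind of friction-aware compensation is combined with a compliant controller, DMP motion representation, and visual/force feedback for RL; the snippet only illustrates the general feedforward idea under the stated assumptions.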

