Show simple item record

dc.contributor.author: Taranovic, Aleksandar
dc.contributor.author: Jevtic, Aleksandar
dc.contributor.author: Torras, Carme
dc.contributor.other: Universitat Politècnica de Catalunya. Departament d'Enginyeria de Sistemes, Automàtica i Informàtica Industrial
dc.contributor.other: Institut de Robòtica i Informàtica Industrial
dc.date.accessioned: 2018-07-09T10:12:28Z
dc.date.available: 2018-07-09T10:12:28Z
dc.date.issued: 2018
dc.identifier.citation: Taranovic, A., Jevtic, A., Torras, C. Adaptable multimodal interaction framework for robot-assisted cognitive training. A: ACM/IEEE International Conference on Human-Robot Interaction. "Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction". 2018, p. 327-328.
dc.identifier.uri: http://hdl.handle.net/2117/119140
dc.description.abstract: The size of the population with cognitive impairment is increasing worldwide, and socially assistive robotics offers a solution to the growing demand for professional carers. Adaptation to users generates more natural, human-like behavior that may be crucial for wider robot acceptance. The focus of this work is on robot-assisted cognitive training of patients who suffer from mild cognitive impairment (MCI) or Alzheimer's disease. We propose a framework that adjusts the level of robot assistance and the way robot actions are executed according to the user input. The actions can be performed using any of the following modalities: speech, gesture, and display, or their combination. The choice of modalities depends on the availability of the required resources. The memory state of the user was implemented as a Hidden Markov Model, and it was used to determine the level of robot assistance. A pilot user study was performed to evaluate the effects of the proposed framework on the quality of interaction with the robot.
dc.format.extent: 2 p.
dc.language.iso: eng
dc.rights: Attribution-NonCommercial-NoDerivs 3.0 Spain
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/3.0/es/
dc.subject: Àrees temàtiques de la UPC::Informàtica::Robòtica
dc.subject.other: intelligent robots
dc.subject.other: human-robot interaction
dc.subject.other: social robotics
dc.title: Adaptable multimodal interaction framework for robot-assisted cognitive training
dc.type: Conference report
dc.contributor.group: Universitat Politècnica de Catalunya. ROBiri - Grup de Robòtica de l'IRI
dc.identifier.doi: 10.1145/3173386.3176911
dc.description.peerreviewed: Peer Reviewed
dc.subject.inspec: Classificació INSPEC::Automation::Robots
dc.relation.publisherversion: https://dl.acm.org/citation.cfm?doid=3173386.3176911
dc.rights.access: Open Access
local.identifier.drac: 23227844
dc.description.version: Postprint (author's final draft)
local.citation.author: Taranovic, A.; Jevtic, A.; Torras, C.
local.citation.contributor: ACM/IEEE International Conference on Human-Robot Interaction
local.citation.publicationName: Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction
local.citation.startingPage: 327
local.citation.endingPage: 328
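The abstract above describes the user's memory state being modeled as a Hidden Markov Model whose belief determines the level of robot assistance. The sketch below is not taken from the paper; it is only a minimal illustration of that general idea, and all state names, probability values, and thresholds are assumptions made for the example: a two-state HMM over the user's memory is filtered with the forward algorithm, and the resulting probability that the user has forgotten an item selects a discrete assistance level.

```python
# Illustrative sketch (not the authors' code): a two-state HMM over the user's
# memory state, filtered with the forward algorithm; the belief drives a
# discrete robot-assistance level. All parameters below are assumed values.
import numpy as np

STATES = ["remembers", "forgot"]                         # hidden memory states (assumed)
OBS = {"correct": 0, "incorrect": 1, "no_response": 2}   # observable user input (assumed)

start = np.array([0.8, 0.2])              # P(initial memory state)
trans = np.array([[0.9, 0.1],             # P(next state | current state)
                  [0.3, 0.7]])
emit = np.array([[0.8, 0.15, 0.05],       # P(observation | "remembers")
                 [0.2, 0.50, 0.30]])      # P(observation | "forgot")

def filter_belief(belief, observation):
    """One forward-algorithm step: predict with `trans`, update with `emit`."""
    predicted = trans.T @ belief
    updated = predicted * emit[:, OBS[observation]]
    return updated / updated.sum()

def assistance_level(belief):
    """Map P(forgot) to a discrete level: 0 = none, 1 = hint, 2 = full prompt."""
    p_forgot = belief[1]
    if p_forgot < 0.3:
        return 0
    if p_forgot < 0.7:
        return 1
    return 2

# Example: belief and assistance level evolving with a few user responses.
belief = start
for response in ["correct", "incorrect", "no_response"]:
    belief = filter_belief(belief, response)
    print(response, belief.round(2), "-> level", assistance_level(belief))
```

In such a setup, the chosen level could then be rendered through whichever modality (speech, gesture, or display) is available, which is the role the framework's modality selection plays in the paper.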


Files in this item


This item appears in the following collection(s)
