Adaptable multimodal interaction framework for robot-assisted cognitive training
Document type: Conference report
Rights access: Open Access
Abstract: The size of the population with cognitive impairment is increasing worldwide, and socially assistive robotics offers a solution to the growing demand for professional carers. Adaptation to users generates more natural, human-like behavior that may be crucial for wider robot acceptance. The focus of this work is on robot-assisted cognitive training for patients who suffer from mild cognitive impairment (MCI) or Alzheimer's disease. We propose a framework that adjusts the level of robot assistance and the way the robot's actions are executed according to user input. The actions can be performed using any of the following modalities: speech, gesture, and display, or their combination. The choice of modalities depends on the availability of the required resources. The memory state of the user was modeled as a Hidden Markov Model and used to determine the level of robot assistance. A pilot user study was performed to evaluate the effects of the proposed framework on the quality of interaction with the robot.
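The abstract describes inferring the user's memory state with a Hidden Markov Model and using it to select the level of robot assistance. A minimal sketch of how such an inference-to-assistance mapping could work is shown below; the state names, transition/emission probabilities, observation labels, and assistance thresholds are all illustrative assumptions, not values from the paper.

```python
# Sketch (not the paper's implementation): a two-state HMM over the user's
# memory state, updated from observed task responses with the forward
# algorithm, and a mapping from the inferred belief to an assistance level.
# All probabilities and thresholds below are illustrative assumptions.

STATES = ("remembers", "forgot")

# Transition probabilities P(next_state | state) -- assumed values
TRANS = {
    "remembers": {"remembers": 0.9, "forgot": 0.1},
    "forgot":    {"remembers": 0.3, "forgot": 0.7},
}

# Emission probabilities P(observation | state) -- assumed values
EMIT = {
    "remembers": {"correct": 0.8, "incorrect": 0.2},
    "forgot":    {"correct": 0.3, "incorrect": 0.7},
}

def forward_update(belief, obs):
    """One step of the HMM forward algorithm: predict, then correct."""
    # Prediction step: propagate the belief through the transition model.
    predicted = {
        s: sum(belief[p] * TRANS[p][s] for p in STATES) for s in STATES
    }
    # Correction step: weight by the likelihood of the observation.
    unnorm = {s: predicted[s] * EMIT[s][obs] for s in STATES}
    z = sum(unnorm.values())
    return {s: unnorm[s] / z for s in STATES}

def assistance_level(belief):
    """Map the belief over memory states to a discrete assistance level."""
    p_forgot = belief["forgot"]
    if p_forgot > 0.7:
        return "full guidance"
    if p_forgot > 0.4:
        return "hint"
    return "no assistance"

# Usage: start from a uniform belief and update after each user response.
belief = {"remembers": 0.5, "forgot": 0.5}
for obs in ("incorrect", "incorrect", "correct"):
    belief = forward_update(belief, obs)
    print(obs, "->", assistance_level(belief))
```

In this sketch, repeated incorrect responses shift the belief toward the "forgot" state and raise the assistance level, while correct responses let the robot back off, which matches the adaptive behavior the abstract describes.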
Citation: Taranovic, A., Jevtic, A., Torras, C. Adaptable multimodal interaction framework for robot-assisted cognitive training. In: ACM/IEEE International Conference on Human-Robot Interaction. "Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction". 2018, p. 327-328.
- IRI - Institut de Robòtica i Informàtica Industrial, CSIC-UPC - Conference papers/communications
- Departament d'Enginyeria de Sistemes, Automàtica i Informàtica Industrial - Conference papers/communications
- ROBiri - Grup de Robòtica de l'IRI - Conference papers/communications