Human-robot collaborative scene mapping from relational descriptions
Document type: Conference report
Rights access: Open Access
In this article we propose a method for cooperatively building a scene map between a human and a robot, using a spatial relational model that the robot employs to interpret human descriptions of the scene. The description consists of a set of spatial relations between the objects in the scene; the scene map contains the positions of these objects. To this end we propose a model based on the generation of scalar fields of applicability for each of the available relations. The method can be summarized as follows. First, a person comes into the room and describes the scene to the robot, including in the description semantic information about the objects that the robot cannot obtain from its sensors. From this description the robot builds the scene mental map. Second, the robot senses the scene with a 2D range laser, building the scene sensed map; the objects' positions in the mental map are used to guide the sensing process. Third, the robot fuses the two maps, linking the semantic information about the described objects to the corresponding sensed ones. The resulting map is called the scene enriched map.
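To make the idea of a scalar field of applicability concrete, the sketch below scores every point of a room grid by how well it satisfies a relation such as "left of the table", and picks the best-scoring point as the object's position in the mental map. This is a minimal illustrative model (Gaussian-style angular and distance terms, with made-up parameters `sigma_ang` and `sigma_dist`), not the exact relational model used in the paper.

```python
import numpy as np

def applicability_left_of(points, ref, sigma_ang=0.6, sigma_dist=1.5):
    """Scalar applicability field for the relation 'left of ref'.

    Illustrative scoring, not the paper's exact model: applicability is
    high for points roughly in the -x direction from the reference
    object and decays smoothly with distance from it.
    """
    d = points - ref                          # vectors from ref to each point
    dist = np.linalg.norm(d, axis=-1)
    ang = np.arctan2(d[..., 1], d[..., 0])    # direction of each point w.r.t. ref
    # 'left of' is taken as the pi direction (negative x in the ref frame);
    # wrap the angular error into [-pi, pi] before scoring
    ang_err = np.abs(np.arctan2(np.sin(ang - np.pi), np.cos(ang - np.pi)))
    return np.exp(-(ang_err / sigma_ang) ** 2) * np.exp(-(dist / sigma_dist) ** 2)

# Coarse grid over the room; interpret "a chair left of the table"
xs, ys = np.meshgrid(np.linspace(-3, 3, 61), np.linspace(-3, 3, 61))
grid = np.stack([xs, ys], axis=-1)
table = np.array([0.0, 0.0])
field = applicability_left_of(grid, table)
# Most applicable position for the chair in the mental map
best = grid[np.unravel_index(np.argmax(field), field.shape)]
```

In a full system, fields from several relations in the description (e.g. "left of the table and near the door") would be combined, for instance by multiplying them, before taking the maximum.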
Citation: Retamino, E.; Sanfeliu, A. Human-robot collaborative scene mapping from relational descriptions. In: Iberian Robotics Conference. "ROBOT2013: First Iberian Robotics Conference, Advances in Robotics, Vol. 2". Madrid: Springer, 2013, p. 331-346.
- VIS - Visió Artificial i Sistemes Intel·ligents - Conference papers/communications
- IRI - Institut de Robòtica i Informàtica Industrial, CSIC-UPC - Conference papers/communications
- Departament d'Enginyeria de Sistemes, Automàtica i Informàtica Industrial - Conference papers/communications