Basic and applied research into the development of intelligent systems capable of interacting with the world autonomously and ubiquitously. Such systems must perceive, reason, plan, act and learn from previous experience. The group works actively in three areas: social and aerial robotics; computer vision; and structural pattern recognition. Within social robotics, work focuses on the following topics: human-robot interaction; social and autonomous robot localisation and navigation; simultaneous localisation and mapping (SLAM); ubiquitous robotics; cooperative mobile robotics; and aerial robotics. In computer vision, the group works on object tracking, identification and recognition; camera sensor networks; data fusion; and cooperative perception. In structural pattern recognition, work centres on graph synthesis and matching methods and their application to robotics.

The Artificial Vision and Intelligent Systems Group (VIS) carries out basic and applied research with the aim of understanding and designing intelligent systems that are capable of interacting with the real world in an autonomous and wide-reaching manner. Such intelligent systems must perceive, reason, plan, act and learn from previous experiences. The group works on the following topics: robust colour image segmentation and labelling, pattern recognition, viewpoint-invariant object learning and recognition, object tracking, face tracking, biometrics, processing and analysis of medical images for diagnosis, document analysis, mobile robot navigation, simultaneous localisation and map building, visual servoing, and human-computer interaction. The possible areas of application of the VIS's research include the automotive and transport industry, the biomedical imaging industry, the space industry, robotics applications, security, home and office automation, the entertainment industry, and future computing environments.
Recent Submissions

  • Unsupervised image-to-video clothing transfer 

    Pumarola Peris, Albert; Goswami, Vedanuj; Vicente, Francisco; De La Torre, Fernando; Moreno-Noguer, Francesc (2019)
    Conference report
    Open Access
    We present a system to photo-realistically transfer the clothing of a person in a reference image into another person in an unconstrained image or video. Our architecture is based on a GAN equipped with a physical memory ...
  • 3DPeople: modeling the geometry of dressed humans 

    Pumarola Peris, Albert; Sánchez Riera, Jordi; Choi, Gary; Sanfeliu Cortés, Alberto; Moreno-Noguer, Francesc (2019)
    Conference report
    Open Access
    Recent advances in 3D human shape estimation build upon parametric representations that model very well the shape of the naked body, but are not appropriate to represent the clothing geometry. In this paper, we present an ...
  • Human-robot collaborative navigation search using social reward sources 

    Dalmasso Blanch, Marc; Garrell Zulueta, Anais; Jimenez Schlegl, Pablo; Sanfeliu Cortés, Alberto (2019)
    Conference report
    Open Access
    This paper proposes a Social Reward Sources (SRS) design for a Human-Robot Collaborative Navigation (HRCN) task: human-robot collaborative search. It is a flexible approach capable of handling the collaborative task, ...
  • Effects of a social force model reward in robot navigation based on deep reinforcement learning 

    Gil Viyuela, Óscar; Sanfeliu Cortés, Alberto (2019)
    Conference report
    Open Access
    This paper proposes the inclusion of the Social Force Model (SFM) in a concrete Deep Reinforcement Learning (RL) framework for robot navigation. These types of techniques have been shown to be useful for dealing with ...
  • Guiding and localising in real-time a mobile robot with a monocular camera in non-flat terrains 

    Vidal Calleja, Teresa; Sanfeliu Cortés, Alberto; Andrade-Cetto, Juan
    Conference report
    Open Access
    In this paper we present a real-time active motion strategy for a mobile robot navigating in a non-flat terrain and its 3D constrained motion model. The aim is to control the robot with measurements from only one camera ...
  • Online learning and detection of faces with low human supervision 

    Villamizar Vergel, Michael Alejandro; Sanfeliu Cortés, Alberto; Moreno-Noguer, Francesc (Springer, 2019)
    Article
    Open Access
    We present an efficient, online, and interactive approach for computing a classifier, called Wild Lady Ferns (WiLFs), for face learning and detection using small human supervision. More precisely, on the one hand, WiLFs ...
  • Odometry estimation for aerial manipulation 

    Santamaria Navarro, Àngel; Solà Ortega, Joan; Andrade-Cetto, Juan (Springer, 2019)
    Part of book or chapter of book
    Open Access
    This chapter explains a fast and low-cost state localization estimation method for small-sized UAVs, that uses an IMU, a smart camera and an infrared time-of-flight range sensor that act as an odometer providing absolute ...
  • Perception for detection and grasping 

    Guerra Paradas, Edmundo; Pumarola Peris, Albert; Grau Saldes, Antoni; Sanfeliu Cortés, Alberto (Springer, 2019)
    Part of book or chapter of book
    Open Access
    This research presents a methodology for the detection of the crawler used in the project AEROARMS. The approach consists of using a two-step progressive strategy, going from rough detection and tracking, for approximation ...
  • Precise localization for aerial inspection using augmented reality markers 

    Amor Martínez, Adrián; Ruiz García, Alberto; Moreno-Noguer, Francesc; Sanfeliu Cortés, Alberto (Springer, 2019)
    Part of book or chapter of book
    Open Access
    This chapter is devoted to explaining a method for precise localization using augmented reality markers. This method can achieve a precision of less than 5 mm in position at a distance of 0.7 m, using a visual mark of 17 mm × ...
  • Visual servoing of aerial manipulators 

    Santamaria Navarro, Àngel; Andrade-Cetto, Juan; Lippiello, Vincenzo (Springer, 2019)
    Part of book or chapter of book
    Open Access
    This chapter describes the classical techniques to control an aerial manipulator by means of visual information and presents an uncalibrated image-based visual servo method to drive the aerial vehicle. The proposed technique ...
  • Timed-elastic bands for manipulation motion planning 

    Magyar, Bence; Tsiogkas, Nikolaos; Deray, Jeremie; Pfeiffer, Sammy; Lane, David (Institute of Electrical and Electronics Engineers (IEEE), 2019-10-01)
    Article
    Open Access
    Motion planning is one of the main problems studied in the field of robotics. However, it is still challenging for the state-of-the-art methods to handle multiple conditions that allow better paths to be found. For example, ...
  • Galileo and EGNOS as an asset for UTM safety and security 

    Jimenez, Adrian; Andrade-Cetto, Juan; Tesfai, Ivan; Dontas, Ioannis; Capitan, Carlos; Oliveres, Enric; Jia, Huamin; Kostaridis, Antonis (2019)
    Conference report
    Open Access
    GAUSS (Galileo-EGNOS as an Asset for UTM Safety and Security) is a H2020 project that aims at designing and developing high performance positioning systems for drones within the U-Space framework focusing on UAS (Unmanned ...