Depth-supported real-time video segmentation with the Kinect
Document type: Conference report
Rights access: Restricted access - publisher's policy
European Commission's projects:
INTELLACT - Intelligent observation and execution of Actions and manipulations (EC-FP7-269959)
GARNICS - Gardening with a Cognitive System (EC-FP7-247947)
Abstract: We present a real-time technique for the spatiotemporal segmentation of color/depth movies. Images are segmented using a parallel Metropolis algorithm implemented on a GPU, utilizing both color and depth information acquired with the Microsoft Kinect. Segments represent the equilibrium states of a Potts model; tracking of segments is achieved by warping the obtained segment labels to the next frame using real-time optical flow, which reduces the number of iterations required for the Metropolis method to reach the new equilibrium state. By including depth information in the framework, true object boundaries can be found more easily, which also improves the temporal coherency of the method. The algorithm has been tested on videos of medium resolution showing human manipulations of objects. The framework provides an inexpensive visual front end for the preprocessing of videos in industrial settings and robot labs, and can potentially be used in various applications.
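To illustrate the two core ideas in the abstract, the following is a minimal CPU sketch (not the authors' GPU implementation): a single Metropolis sweep of a Potts-model relaxation whose bond strengths depend on combined color/depth feature similarity, plus a nearest-neighbor warp of segment labels along a dense flow field to initialize the next frame. All parameter names (`beta`, `sigma`) and the 4-connected neighborhood are illustrative assumptions, not values from the paper.

```python
import numpy as np

def metropolis_potts_step(labels, features, beta=2.0, sigma=0.1, rng=None):
    """One Metropolis sweep of a Potts-model relaxation (illustrative sketch).

    labels   : (H, W) int array of current segment labels
    features : (H, W, C) float array, e.g. color channels stacked with depth
    beta     : inverse temperature (coupling strength) -- assumed parameter
    sigma    : feature-similarity scale -- assumed parameter
    """
    rng = np.random.default_rng() if rng is None else rng
    H, W = labels.shape
    for y in range(H):
        for x in range(W):
            # Collect 4-connected neighbors inside the image.
            nbrs = [(y + dy, x + dx)
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= y + dy < H and 0 <= x + dx < W]
            # Candidate label: copy from a randomly chosen neighbor.
            ny, nx = nbrs[rng.integers(len(nbrs))]
            cand = labels[ny, nx]
            if cand == labels[y, x]:
                continue

            def energy(lab):
                # Potts energy: satisfied bonds to equal-labeled neighbors,
                # weighted by color/depth similarity so crossing a true
                # object boundary (large feature difference) costs little.
                e = 0.0
                for qy, qx in nbrs:
                    if labels[qy, qx] == lab:
                        d = np.sum((features[y, x] - features[qy, qx]) ** 2)
                        e -= np.exp(-d / (2 * sigma ** 2))
                return e

            dE = energy(cand) - energy(labels[y, x])
            # Metropolis acceptance rule.
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                labels[y, x] = cand
    return labels

def warp_labels(labels, flow):
    """Warp segment labels toward the next frame using a dense flow field
    (nearest-neighbor backward warping; flow is (H, W, 2) in pixels)."""
    H, W = labels.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_y = np.clip(np.rint(ys - flow[..., 1]).astype(int), 0, H - 1)
    src_x = np.clip(np.rint(xs - flow[..., 0]).astype(int), 0, W - 1)
    return labels[src_y, src_x]
```

Starting the next frame's relaxation from the warped labels, rather than from scratch, is what lets the equilibrium be reached in fewer Metropolis iterations and keeps segment identities stable over time.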
Citation: Abramov, A. [et al.]. Depth-supported real-time video segmentation with the Kinect. In: Winter Vision Meeting: Workshop on Applications of Computer Vision. "Proceedings of 2011 Winter Vision Meeting: Workshop on Applications of Computer Vision". Kona: 2011, p. 457-464.