Coupling camera-tracked humans with a simulated virtual crowd
Document type: Conference report
Rights access: Open Access
Our objective in this paper is to show how a group of real people can be coupled with a simulated crowd of virtual humans. We attach group behaviors to the simulated humans so that they react plausibly to the real people. We use a two-stage system: in the first stage, a group of people is segmented from live video and a human-detection algorithm extracts their positions, which then feed the second stage, the simulation system. These positions allow the second module to render the real humans as avatars in the scene, while the behavior of the additional virtual humans is determined by a simulation based on a social forces model. Developing the method required three specific contributions: a GPU implementation of the codebook algorithm that adds an auxiliary codebook to make background subtraction more robust to illumination changes; the use of semantic local binary patterns as a human descriptor; and the parallelization of a social forces model, in which we solve the problem of agents merging with each other. The experimental results show a large virtual crowd reacting to over a dozen humans in a real environment.
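The social forces model mentioned above steers each virtual agent by combining a driving force toward its goal with pairwise repulsive forces from other agents. The following is a minimal, illustrative sketch of one Helbing-style integration step; the parameter names and values (`tau`, `A`, `B`, etc.) are assumptions for illustration and are not taken from the paper, which additionally parallelizes the model on the GPU and handles the agent-merging case.

```python
import numpy as np

def social_forces_step(pos, vel, goals, dt=0.05, tau=0.5,
                       v_desired=1.3, A=2.0, B=0.3):
    """One integration step of a Helbing-style social forces model.

    pos, vel, goals: (N, 2) arrays of agent positions, velocities,
    and goal points. All parameters are illustrative defaults.
    """
    # Driving force: relax toward the desired speed aimed at the goal.
    to_goal = goals - pos
    dist_goal = np.linalg.norm(to_goal, axis=1, keepdims=True)
    e = to_goal / np.maximum(dist_goal, 1e-9)
    force = (v_desired * e - vel) / tau

    # Pairwise repulsion: exponentially decaying force pushing agents
    # apart. The epsilon guard keeps near-coincident agents from
    # producing a zero direction vector (a simple safeguard against
    # agents merging into each other).
    diff = pos[:, None, :] - pos[None, :, :]      # (N, N, 2) offsets
    d = np.linalg.norm(diff, axis=2)              # (N, N) distances
    np.fill_diagonal(d, np.inf)                   # no self-repulsion
    n = diff / np.maximum(d[..., None], 1e-9)     # unit directions
    force += ((A * np.exp(-d / B))[..., None] * n).sum(axis=1)

    # Explicit Euler integration of the resulting acceleration.
    vel_new = vel + force * dt
    pos_new = pos + vel_new * dt
    return pos_new, vel_new
```

Because the per-agent forces depend only on current positions and velocities, each agent's update is independent within a step, which is what makes the model amenable to the GPU parallelization the paper describes.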
Citation: Rivalcoba, J. [et al.]. Coupling camera-tracked humans with a simulated virtual crowd. In: International Conference on Computer Graphics and Applications. "GRAPP 2014: proceedings of the 9th International Conference on Computer Graphics Theory and Applications: Lisbon, Portugal, 5-8 January, 2014". Lisbon: SciTePress, 2014, p. 312-321.