Show simple item record

dc.contributor.author: Pontón Martínez, José Luis
dc.contributor.author: Yun, Haoran
dc.contributor.author: Andújar Gran, Carlos Antonio
dc.contributor.author: Pelechano Gómez, Núria
dc.contributor.other: Universitat Politècnica de Catalunya. Departament de Ciències de la Computació
dc.contributor.other: Universitat Politècnica de Catalunya. Doctorat en Computació
dc.date.accessioned: 2023-03-21T09:05:32Z
dc.date.available: 2023-03-21T09:05:32Z
dc.date.issued: 2022-12
dc.identifier.citation: Ponton, J.L. [et al.]. Combining motion matching and orientation prediction to animate avatars for consumer-grade VR devices. "Computer graphics forum", December 2022, vol. 41, no. 8, p. 107-118.
dc.identifier.issn: 1467-8659
dc.identifier.uri: http://hdl.handle.net/2117/385236
dc.description.abstract: The animation of user avatars plays a crucial role in conveying their pose, gestures, and relative distances to virtual objects or other users. Self-avatar animation in immersive VR helps improve the user experience and provides a Sense of Embodiment. However, consumer-grade VR devices typically include at most three trackers: one at the Head Mounted Display (HMD), and two at the handheld VR controllers. Since the problem of reconstructing the user pose from such sparse data is ill-defined, especially for the lower body, the approach adopted by most VR games consists of assuming the body orientation matches that of the HMD, and applying animation blending and time-warping from a reduced set of animations. Unfortunately, this approach produces noticeable mismatches between user and avatar movements. In this work we present a new approach to animate user avatars that is suitable for current mainstream VR devices. First, we use a neural network to estimate the user's body orientation based on the tracking information from the HMD and the hand controllers. Then we use this orientation together with the velocity and rotation of the HMD to build a feature vector that feeds a Motion Matching algorithm. We built a MoCap database with animations of VR users wearing an HMD and used it to test our approach on both self-avatars and other users' avatars. Our results show that our system can provide a large variety of lower body animations while correctly matching the user orientation, which in turn allows us to represent not only forward movements but also stepping in any direction.
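The abstract describes a two-stage pipeline: a predicted body orientation is combined with HMD velocity and rotation into a query feature vector, which is matched against a MoCap database by a Motion Matching search. The sketch below illustrates only that query-and-match idea; the feature layout, function names, and plain nearest-neighbour search are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def build_feature(body_orientation_deg, hmd_velocity, hmd_angular_velocity):
    """Concatenate predicted body orientation (encoded as sin/cos to avoid
    angle wrap-around), HMD linear velocity, and HMD angular velocity into
    one motion-matching query vector. Layout is a hypothetical example."""
    theta = np.deg2rad(body_orientation_deg)
    return np.concatenate([[np.sin(theta), np.cos(theta)],
                           np.asarray(hmd_velocity, dtype=float),
                           [hmd_angular_velocity]])

def motion_match(query, database):
    """Return the index of the database row (one feature vector per animation
    frame) closest to the query under Euclidean distance."""
    dists = np.linalg.norm(database - query, axis=1)
    return int(np.argmin(dists))

# Tiny stand-in database: three feature vectors for three candidate frames.
db = np.stack([
    build_feature(0.0,   [0.0, 0.0, 0.0],  0.0),   # standing still
    build_feature(90.0,  [1.0, 0.0, 0.0],  0.5),   # turning while moving
    build_feature(180.0, [0.0, 1.0, 0.0], -0.2),   # walking backwards
])
query = build_feature(90.0, [1.0, 0.0, 0.0], 0.5)
best = motion_match(query, db)  # index of the best-matching frame
```

In a real system the matched frame would then be played back (and blended with the previous pose); here the search simply selects the closest stored feature vector.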
dc.description.sponsorship: This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 860768 (CLIPE project) and the Spanish Ministry of Science and Innovation (PID2021-122136OB-C21).
dc.format.extent: 12 p.
dc.language.iso: eng
dc.rights: Attribution-NonCommercial 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc/4.0/
dc.subject: Àrees temàtiques de la UPC::Informàtica::Infografia
dc.subject.lcsh: Computer animation
dc.subject.lcsh: Virtual reality
dc.subject.lcsh: Avatars (Virtual reality)
dc.subject.other: Self-avatars
dc.subject.other: User models
dc.subject.other: Motion Capture
dc.title: Combining motion matching and orientation prediction to animate avatars for consumer-grade VR devices
dc.type: Article
dc.subject.lemac: Animació per ordinador
dc.subject.lemac: Realitat virtual
dc.subject.lemac: Avatars (Realitat virtual)
dc.contributor.group: Universitat Politècnica de Catalunya. ViRVIG - Grup de Recerca en Visualització, Realitat Virtual i Interacció Gràfica
dc.identifier.doi: 10.1111/cgf.14628
dc.description.peerreviewed: Peer Reviewed
dc.relation.publisherversion: https://onlinelibrary.wiley.com/doi/10.1111/cgf.14628
dc.rights.access: Open Access
local.identifier.drac: 35243514
dc.description.version: Postprint (published version)
dc.relation.projectid: info:eu-repo/grantAgreement/EC/H2020/860768/EU/Creating Lively Interactive Populated Environments/CLIPE
dc.relation.projectid: info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2017-2020/PID2021-122136OB-C21/ES/Entornos 3D de alta fidelidad para Realidad Virtual y Computación Visual: geometría, movimiento, interacción y visualización para salud, arquitectura y ciudades/
local.citation.author: Ponton, J.L.; Yun, H.; Andujar, C.; Pelechano, N.
local.citation.publicationName: Computer graphics forum
local.citation.volume: 41
local.citation.number: 8
local.citation.startingPage: 107
local.citation.endingPage: 118


Files in this item


This item appears in the following collection(s)
