Text-driven multi-human motion generation
Abstract
Generating human-to-human interactions is a significant challenge, primarily due to the intricate dynamics involved in these interactions. Learning these dynamics is further complicated by the vast space of possible combinations in human motion generation. Moreover, a key aspect of generation is conditioning the output, often through natural language, which increases the complexity but makes the approach more accessible. In this thesis, we introduce a novel Diffusion Model with a Transformer-based architecture, conditioned on textual descriptions of both the overall motion interaction and the individual motions within it. By focusing on the individual components of the interaction, our method achieves more precise conditioning of these specific motions, while the interaction-level descriptions enable the model to capture the interplay between them. Our approach is evaluated on the InterHuman dataset, where it improves on the results of previous methods. Additionally, this thesis contributes a new Motion-to-Text methodology, a multi-weight sampling technique, and the use of Large Language Models to augment the textual descriptions in motion datasets.
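As a rough illustration of the kind of architecture the abstract describes, the sketch below shows a toy Transformer denoiser for one person's motion, conditioned on both an interaction-level text embedding and a per-person text embedding. Everything here is an assumption for illustration: the class and parameter names, the feature dimensions, and the single-condition-token design are hypothetical and are not the thesis's actual implementation.

```python
import torch
import torch.nn as nn

class TextConditionedDenoiser(nn.Module):
    """Toy Transformer denoiser for text-conditioned motion diffusion.

    Illustrative only: combines an interaction-level text embedding with a
    per-person text embedding, in the spirit of the dual conditioning the
    abstract describes. Dimensions and design choices are assumptions.
    """

    def __init__(self, motion_dim=262, d_model=512, n_heads=8,
                 n_layers=4, text_dim=512):
        super().__init__()
        self.motion_proj = nn.Linear(motion_dim, d_model)
        self.inter_proj = nn.Linear(text_dim, d_model)   # interaction-level text
        self.person_proj = nn.Linear(text_dim, d_model)  # individual-motion text
        # Simple MLP embedding of the scalar diffusion timestep.
        self.time_embed = nn.Sequential(
            nn.Linear(1, d_model), nn.SiLU(), nn.Linear(d_model, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.out = nn.Linear(d_model, motion_dim)

    def forward(self, x_t, t, interaction_emb, person_emb):
        # x_t: (B, T, motion_dim) noisy motion; t: (B,) diffusion timestep;
        # interaction_emb, person_emb: (B, text_dim) precomputed text embeddings.
        h = self.motion_proj(x_t)
        cond = (self.inter_proj(interaction_emb)
                + self.person_proj(person_emb)
                + self.time_embed(t.float().unsqueeze(-1)))
        # Prepend the fused condition as an extra token the motion attends to.
        h = torch.cat([cond.unsqueeze(1), h], dim=1)
        h = self.encoder(h)
        # Predict the noise for the motion tokens only (drop the condition token).
        return self.out(h[:, 1:])

# Minimal usage with random stand-in data:
model = TextConditionedDenoiser()
x_t = torch.randn(2, 60, 262)          # batch of 2 sequences, 60 frames
t = torch.randint(0, 1000, (2,))       # diffusion timesteps
inter = torch.randn(2, 512)            # interaction-level text embedding
person = torch.randn(2, 512)           # per-person text embedding
eps_hat = model(x_t, t, inter, person) # (2, 60, 262) predicted noise
```

In a full two-person setup, one such denoiser pass would be run per person (or the two sequences would be jointly attended), with the shared interaction embedding tying the individual motions together; the sketch above shows only the single-person conditioning path.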

