Seamless Human Motion Composition with Blended Positional Encodings
CVPR 2024
Abstract
Conditional human motion generation is an important topic with many
applications in virtual reality, gaming, and robotics. While prior works have
focused on generating motion guided by text, music, or scenes, these typically
result in isolated motions confined to short durations. Instead, we address the
generation of long, continuous sequences guided by a series of varying textual
descriptions. In this context, we introduce FlowMDM, the first diffusion-based
model that generates seamless Human Motion Compositions (HMC) without any
postprocessing or redundant denoising steps. For this, we introduce the Blended
Positional Encodings, a technique that leverages both absolute and relative
positional encodings in the denoising chain. More specifically, global motion
coherence is recovered at the absolute stage, whereas smooth and realistic
transitions are built at the relative stage. As a result, we achieve
state-of-the-art results in terms of accuracy, realism, and smoothness on the
Babel and HumanML3D datasets. FlowMDM excels when trained with only a single
description per motion sequence thanks to its Pose-Centric Cross-ATtention,
which makes it robust against varying text descriptions at inference time.
Finally, to address the limitations of existing HMC metrics, we propose two new
metrics: the Peak Jerk and the Area Under the Jerk, to detect abrupt
transitions.
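The abstract describes Blended Positional Encodings as switching between positional schemes along the denoising chain: absolute encodings during the noisy steps to recover the global layout of each text-conditioned segment, and relative encodings during the later steps so frames attend mostly to their temporal neighborhood and transitions stay smooth. The sketch below illustrates that idea only; the function names (`absolute_pe`, `relative_bias`, `blended_encoding`) and the single step threshold `t_switch` are assumptions for illustration, not FlowMDM's actual schedule or implementation.

```python
import math
import torch


def absolute_pe(num_frames: int, d_model: int) -> torch.Tensor:
    """Standard sinusoidal absolute positional encoding (assumes even d_model)."""
    pos = torch.arange(num_frames, dtype=torch.float32).unsqueeze(1)
    div = torch.exp(torch.arange(0, d_model, 2, dtype=torch.float32)
                    * (-math.log(10000.0) / d_model))
    pe = torch.zeros(num_frames, d_model)
    pe[:, 0::2] = torch.sin(pos * div)
    pe[:, 1::2] = torch.cos(pos * div)
    return pe


def relative_bias(num_frames: int) -> torch.Tensor:
    """Toy relative attention bias that depends only on frame distance."""
    idx = torch.arange(num_frames)
    dist = (idx[None, :] - idx[:, None]).abs().float()
    return -dist / num_frames  # nearby frames attend more strongly


def blended_encoding(x: torch.Tensor, t: int, t_switch: int):
    """Pick the positional scheme by denoising step t (high t = noisy, 0 = clean).

    Noisy steps (t >= t_switch) add the absolute encoding to the frame features;
    cleaner steps drop it and instead return a relative bias meant to be added
    to the attention logits. Returns (features, attention_bias or None).
    """
    num_frames, d_model = x.shape[-2], x.shape[-1]
    if t >= t_switch:                       # "absolute stage": global coherence
        return x + absolute_pe(num_frames, d_model), None
    return x, relative_bias(num_frames)     # "relative stage": smooth transitions
```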
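The proposed metrics, Peak Jerk and Area Under the Jerk, can be read as statistics of the third time derivative of joint positions. The NumPy sketch below follows that reading under simple assumptions (finite differences over raw joint positions, mean over joints, no transition windowing or normalization); the paper's exact definitions may differ, so treat this as illustrative only.

```python
import numpy as np


def jerk_profile(joints: np.ndarray, fps: float) -> np.ndarray:
    """Per-frame jerk magnitude for a motion clip.

    joints: (num_frames, num_joints, 3) array of joint positions.
    Returns an array of length num_frames - 3 with the mean jerk
    magnitude across joints (third finite difference of position).
    """
    dt = 1.0 / fps
    jerk = np.diff(joints, n=3, axis=0) / dt**3           # (F-3, J, 3)
    return np.linalg.norm(jerk, axis=-1).mean(axis=-1)    # (F-3,)


def peak_jerk(joints: np.ndarray, fps: float) -> float:
    """Maximum of the jerk profile; large spikes flag abrupt transitions."""
    return float(jerk_profile(joints, fps).max())


def area_under_jerk(joints: np.ndarray, fps: float) -> float:
    """Trapezoidal integral of the jerk profile over time."""
    profile = jerk_profile(joints, fps)
    return float(np.trapz(profile, dx=1.0 / fps))
```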