MotionChain: Conversational Motion Controllers via Multimodal Prompts
CoRR (2024)
Abstract
Recent advancements in language models have demonstrated their adeptness in
conducting multi-turn dialogues and retaining conversational context. However,
this proficiency remains largely unexplored in other multimodal generative
models, particularly in human motion models. By integrating multi-turn
conversations into the control of continuous virtual human movements,
generative human motion models can enable an intuitive, step-by-step process
of human task execution for humanoid robotics, game agents, and other
embodied systems.
In this work, we present MotionChain, a conversational human motion controller
to generate continuous and long-term human motion through multimodal prompts.
Specifically, MotionChain consists of multimodal tokenizers that transform
various data types, such as text, image, and motion, into discrete tokens,
coupled with a Vision-Motion-aware Language model. By leveraging large-scale
language, vision-language, and vision-motion data to assist motion-related
generation tasks, MotionChain comprehends each instruction in a multi-turn
conversation and generates human motions following these prompts. Extensive
experiments validate the efficacy of MotionChain, demonstrating
state-of-the-art performance in conversational motion generation, as well as
a more intuitive way of controlling and interacting with virtual humans.
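
To make the tokenize-then-generate pipeline described in the abstract
concrete, below is a minimal, hypothetical PyTorch sketch: modality-specific
tokenizers map text, images, and motion into one shared discrete vocabulary,
and an autoregressive language model predicts motion tokens conditioned on
the conversation history. All class names, codebook sizes, and vocabulary
offsets here are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch of a MotionChain-style pipeline; not the paper's code.
    import torch
    import torch.nn as nn

    VOCAB = {"text": 1000, "image": 512, "motion": 512}   # assumed codebook sizes
    OFFSET = {"text": 0, "image": 1000, "motion": 1512}   # shared-vocabulary offsets
    TOTAL_VOCAB = sum(VOCAB.values())

    class VQTokenizer(nn.Module):
        """Toy vector-quantization tokenizer: nearest codebook entry per frame/patch."""
        def __init__(self, dim: int, codebook_size: int):
            super().__init__()
            self.codebook = nn.Parameter(torch.randn(codebook_size, dim))

        def forward(self, feats: torch.Tensor) -> torch.Tensor:
            # feats: (seq_len, dim) -> (seq_len,) discrete token ids
            dists = torch.cdist(feats, self.codebook)
            return dists.argmin(dim=-1)

    class MotionLM(nn.Module):
        """Tiny autoregressive transformer over the shared token vocabulary."""
        def __init__(self, d_model: int = 128, n_layers: int = 2):
            super().__init__()
            self.embed = nn.Embedding(TOTAL_VOCAB, d_model)
            layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            self.backbone = nn.TransformerEncoder(layer, n_layers)
            self.head = nn.Linear(d_model, TOTAL_VOCAB)

        def forward(self, tokens: torch.Tensor) -> torch.Tensor:
            # Causal mask so each position only attends to earlier tokens.
            mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
            h = self.backbone(self.embed(tokens), mask=mask)
            return self.head(h)  # next-token logits over all modalities

    # Usage: tokenize a text+image prompt, then greedily sample motion tokens.
    motion_tok = VQTokenizer(dim=64, codebook_size=VOCAB["motion"])
    image_tok = VQTokenizer(dim=64, codebook_size=VOCAB["image"])
    lm = MotionLM()

    text_ids = torch.randint(0, VOCAB["text"], (1, 8)) + OFFSET["text"]
    image_ids = image_tok(torch.randn(16, 64)).unsqueeze(0) + OFFSET["image"]
    prompt = torch.cat([text_ids, image_ids], dim=1)

    for _ in range(4):  # generate a few motion tokens autoregressively
        logits = lm(prompt)[:, -1, :]
        # Restrict decoding to the motion slice of the shared vocabulary.
        next_id = logits[:, OFFSET["motion"]:].argmax(-1) + OFFSET["motion"]
        prompt = torch.cat([prompt, next_id.unsqueeze(1)], dim=1)

    print("generated motion tokens:", (prompt[0, -4:] - OFFSET["motion"]).tolist())

In a multi-turn setting, each new instruction and generated motion would be
appended to the token history, so later turns condition on earlier ones.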