Make-Your-Anchor: A Diffusion-based 2D Avatar Generation Framework
CVPR 2024
Abstract
Despite the remarkable progress of talking-head-based avatar-creating
solutions, directly generating anchor-style videos with full-body motions
remains challenging. In this study, we propose Make-Your-Anchor, a novel system
necessitating only a one-minute video clip of an individual for training,
subsequently enabling the automatic generation of anchor-style videos with
precise torso and hand movements. Specifically, we finetune a proposed
structure-guided diffusion model on input video to render 3D mesh conditions
into human appearances. We adopt a two-stage training strategy for the
diffusion model, effectively binding movements with specific appearances. To
produce arbitrarily long temporal videos, we extend the 2D U-Net in the frame-wise
diffusion model to a 3D style without additional training cost, and a simple
yet effective batch-overlapped temporal denoising module is proposed to bypass
the constraints on video length during inference. Finally, a novel
identity-specific face enhancement module is introduced to improve the visual
quality of facial regions in the output videos. Comparative experiments
demonstrate the effectiveness and superiority of the system in terms of visual
quality, temporal coherence, and identity preservation, outperforming SOTA
diffusion/non-diffusion methods. Project page:
.
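The batch-overlapped temporal denoising idea described above can be illustrated with a minimal sketch: a long latent sequence is split into overlapping windows, each window is denoised independently, and predictions are averaged where windows overlap so the output length is not bounded by the model's window size. This is not the authors' implementation; the `window` and `overlap` parameters and the `denoise_batch` callback are illustrative assumptions.

```python
import numpy as np

def batch_overlapped_denoise(latents, denoise_batch, window=16, overlap=4):
    """One denoising step over an arbitrarily long per-frame latent sequence.

    Splits `latents` (shape: (num_frames, ...)) into overlapping windows,
    applies `denoise_batch` (a hypothetical per-window denoiser) to each,
    and averages the predictions on overlapped frames.
    """
    n = len(latents)
    out = np.zeros_like(latents, dtype=float)
    counts = np.zeros(n)  # how many windows covered each frame
    stride = window - overlap
    start = 0
    while start < n:
        end = min(start + window, n)
        out[start:end] += denoise_batch(latents[start:end])
        counts[start:end] += 1
        if end == n:
            break
        start += stride
    # average overlapping predictions frame by frame
    return out / counts.reshape(-1, *([1] * (latents.ndim - 1)))
```

In practice this averaging is applied at every diffusion timestep, so neighboring windows stay consistent throughout the denoising trajectory rather than being stitched together only at the end.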