DiffSpeaker: Speech-Driven 3D Facial Animation with Diffusion Transformer
CoRR (2024)
Abstract
Speech-driven 3D facial animation is important for many multimedia
applications. Recent work has shown promise in using either Diffusion models or
Transformer architectures for this task. However, their mere aggregation does
not lead to improved performance. We suspect this is due to a shortage of
paired audio-4D data, which is crucial for the Transformer to effectively
perform as a denoiser within the Diffusion framework. To tackle this issue, we
present DiffSpeaker, a Transformer-based network equipped with novel biased
conditional attention modules. These modules serve as substitutes for the
traditional self/cross-attention in standard Transformers, incorporating
thoughtfully designed biases that steer the attention mechanisms to concentrate
on both the relevant task-specific and diffusion-related conditions. We also
explore the trade-off between accurate lip synchronization and non-verbal
facial expressions within the Diffusion paradigm. Experiments show our model
not only achieves state-of-the-art performance on existing benchmarks, but also
fast inference speed owing to its ability to generate facial motions in
parallel.
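The abstract describes replacing standard self/cross-attention with attention whose logits carry designed biases that pull focus toward task- and diffusion-related condition tokens. The paper does not give the exact bias design here, so the following is only a minimal sketch of the general idea: additive biases on attention logits favoring a condition token (names, shapes, and the bias value are illustrative assumptions, not the paper's method).

```python
import numpy as np

def biased_attention(q, k, v, bias):
    """Scaled dot-product attention with an additive bias on the logits.

    A positive bias toward selected key positions (e.g. a prepended
    diffusion-step or speaker-style token) steers attention there --
    a loose sketch of "biased conditional attention", not the paper's
    exact formulation.
    """
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d) + bias        # bias steers the attention
    logits -= logits.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(logits)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

# Toy example: position 0 is a hypothetical condition token,
# positions 1-4 are motion-frame tokens; feature dim 8.
rng = np.random.default_rng(0)
T, d = 5, 8
q, k, v = rng.normal(size=(3, T, d))
bias = np.zeros((T, T))
bias[:, 0] = 2.0  # assumed bias: favor attending to the condition token
out = biased_attention(q, k, v, bias)
print(out.shape)  # (5, 8)
```

Because all frames attend in one pass rather than autoregressively, this style of attention is compatible with the parallel motion generation the abstract credits for the model's fast inference.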