FlashSpeech: Efficient Zero-Shot Speech Synthesis
CoRR (2024)
Abstract
Recent progress in large-scale zero-shot speech synthesis has been
significantly advanced by language models and diffusion models. However, the
generation process of both methods is slow and computationally intensive.
Efficient speech synthesis using a lower computing budget to achieve quality on
par with previous work remains a significant challenge. In this paper, we
present FlashSpeech, a large-scale zero-shot speech synthesis system with
approximately 5% of the inference time compared with previous work.
FlashSpeech is built on the latent consistency model and applies a novel
adversarial consistency training approach that can train from scratch without
the need for a pre-trained diffusion model as the teacher. Furthermore, a new
prosody generator module enhances the diversity of prosody, making the rhythm
of the speech sound more natural. The generation processes of FlashSpeech can
be achieved efficiently with one or two sampling steps while maintaining high
audio quality and high similarity to the audio prompt for zero-shot speech
generation. Our experimental results demonstrate the superior performance of
FlashSpeech. Notably, FlashSpeech can be about 20 times faster than other
zero-shot speech synthesis systems while maintaining comparable performance in
terms of voice quality and similarity. Furthermore, FlashSpeech demonstrates
its versatility by efficiently performing tasks like voice conversion, speech
editing, and diverse speech sampling. Audio samples are available at
https://flashspeech.github.io/.
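The abstract's claim of one- or two-step generation follows the general consistency-model sampling recipe: a learned consistency function maps a noisy latent at any noise level directly to a clean sample, and an optional second step re-noises to an intermediate level and maps again. The sketch below illustrates that recipe in NumPy; the function names, noise schedule, and the toy consistency function are illustrative assumptions, not FlashSpeech's actual implementation.

```python
import math
import numpy as np

def consistency_sample(f, shape, sigma_max=80.0, sigma_min=0.002, steps=2, rng=None):
    """Few-step consistency-model sampling (generic sketch).

    `f(x, sigma)` is assumed to be a trained consistency function that
    maps a latent at noise level `sigma` directly to a denoised sample.
    """
    rng = np.random.default_rng() if rng is None else rng
    # start from pure noise at the maximum noise level
    x = rng.standard_normal(shape) * sigma_max
    x = f(x, sigma_max)  # one-step generation
    if steps > 1:
        # optional refinement: re-noise to an intermediate level, map again
        sigma_mid = math.sqrt(sigma_max * sigma_min)
        x = x + sigma_mid * rng.standard_normal(shape)
        x = f(x, sigma_mid)
    return x

# Toy stand-in for a trained consistency function (placeholder only):
toy_f = lambda x, sigma: x / (1.0 + sigma)
sample = consistency_sample(toy_f, (4, 8), steps=2)
```

Each call to `f` replaces the many iterative denoising steps of a diffusion sampler, which is where the reported speedup comes from.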