U-DiT TTS: U-Diffusion Vision Transformer for Text-to-Speech

CoRR (2023)

Abstract
Recently, Score-based Generative Models (SGMs), i.e., Diffusion Probabilistic Models (DPMs), have gained traction due to their ability to produce high-quality synthesized speech in neural speech-synthesis systems. In SGMs, the U-Net architecture and its variants have long dominated as the backbone since their first successful adoption. In this research, we propose the U-DiT architecture, exploring the potential of the vision transformer as the core component of the diffusion model in a TTS system. The proposed U-DiT TTS system, which inherits the best parts of U-Net and ViT, offers great scalability and versatility across different data scales and uses a pretrained HiFi-GAN as the vocoder. The objective (i.e., Fréchet distance) and MOS results demonstrate that our U-DiT TTS system achieves competitive performance on the single-speaker dataset LJSpeech. Our demos are publicly available at: https://eihw.github.io/u-dit-tts/
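The abstract does not show the architecture in code. As a minimal sketch of the general idea (not the authors' implementation), the following PyTorch snippet shows a DiT-style transformer block whose LayerNorms are modulated by the diffusion timestep embedding (adaLN), as could sit at the bottleneck of a U-Net-shaped score network. The class name, tensor shapes, and sizes are illustrative assumptions.

# Illustrative sketch only: a DiT-style block with adaptive LayerNorm
# conditioning on the diffusion timestep embedding. Names and sizes
# are assumptions, not the paper's actual code.
import torch
import torch.nn as nn

class DiTBlock(nn.Module):
    """Transformer block with timestep-conditioned (adaLN) modulation."""
    def __init__(self, dim: int, n_heads: int):
        super().__init__()
        # Affine-free norms; scale/shift come from the timestep embedding.
        self.norm1 = nn.LayerNorm(dim, elementwise_affine=False)
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim, elementwise_affine=False)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )
        # Timestep embedding -> per-block shift, scale, and gate parameters.
        self.ada = nn.Sequential(nn.SiLU(), nn.Linear(dim, 6 * dim))

    def forward(self, x: torch.Tensor, t_emb: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, dim) mel-spectrogram tokens; t_emb: (batch, dim)
        s1, c1, g1, s2, c2, g2 = self.ada(t_emb).chunk(6, dim=-1)
        h = self.norm1(x) * (1 + c1.unsqueeze(1)) + s1.unsqueeze(1)
        x = x + g1.unsqueeze(1) * self.attn(h, h, h, need_weights=False)[0]
        h = self.norm2(x) * (1 + c2.unsqueeze(1)) + s2.unsqueeze(1)
        x = x + g2.unsqueeze(1) * self.mlp(h)
        return x

# Toy usage: denoise a mel-spectrogram segment at the U-Net bottleneck.
x = torch.randn(2, 64, 256)   # (batch, frames, channels), hypothetical sizes
t_emb = torch.randn(2, 256)   # timestep embedding, e.g. sinusoidal + MLP
block = DiTBlock(dim=256, n_heads=4)
print(block(x, t_emb).shape)  # torch.Size([2, 64, 256])

One plausible reason for the adaLN design, as used in DiT-family models, is that it injects the noise level into every block without lengthening the token sequence; how U-DiT TTS conditions on text and timestep in detail is specified in the paper itself.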
Keywords
u-dit, u-diffusion, text-to-speech