
PATNet: A Phoneme-Level Autoregressive Transformer Network for Speech Synthesis

2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2021)

Abstract
Aiming at efficiently predicting acoustic features with high naturalness and robustness, this paper proposes PATNet, a neural acoustic model for speech synthesis using phoneme-level autoregression. PATNet accepts phoneme sequences as input and is built on the Transformer structure. PATNet adopts a duration model instead of an attention mechanism for sequence alignment. Given the predicted spectra of previous phonemes, the decoder of PATNet predicts the multiple spectral frames within one phoneme in parallel. Such phoneme-level autoregression enables PATNet to achieve higher inference efficiency than models with frame-level autoregression, such as Transformer-TTS, and improves the robustness of acoustic feature prediction by utilizing phoneme boundaries explicitly. Experimental results show that speech synthesized by PATNet obtained a lower character error rate (CER) than Tacotron, Transformer-TTS and FastSpeech when evaluated by a speech recognition engine. Besides, PATNet achieved 10 times faster inference than Transformer-TTS and significantly better naturalness than FastSpeech.
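
To make the phoneme-level autoregression concrete, below is a minimal sketch of the decoding loop the abstract describes: frames belonging to one phoneme are predicted in parallel, conditioned on the spectra already predicted for earlier phonemes, with frame counts supplied by a duration model. This is not the authors' implementation; the module names, tensor shapes, and the use of a standard PyTorch TransformerDecoder are all illustrative assumptions.

```python
# Sketch of phoneme-level autoregressive decoding (illustrative, not PATNet's
# actual code; shapes and module choices are assumptions).
import torch
import torch.nn as nn

class PhonemeLevelARDecoder(nn.Module):
    def __init__(self, d_model=256, n_mels=80, n_heads=4, n_layers=3):
        super().__init__()
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, n_layers)
        self.spec_proj = nn.Linear(n_mels, d_model)  # embed previously predicted spectra
        self.out_proj = nn.Linear(d_model, n_mels)   # project decoder states to mel frames

    @torch.no_grad()
    def infer(self, phoneme_memory, durations):
        """phoneme_memory: (1, P, d_model) encoder output, one vector per phoneme.
        durations: per-phoneme frame counts, e.g. from a duration model."""
        mels, history = [], []
        for p, dur in enumerate(durations):
            # All 'dur' frames of the current phoneme are decoded in one shot,
            # unlike frame-level AR, which emits a single frame per step.
            query = phoneme_memory[:, p:p + 1, :].expand(-1, dur, -1)
            if history:
                # Phoneme-level autoregression: condition on the spectra
                # already predicted for the previous phonemes.
                past = self.spec_proj(torch.cat(history, dim=1))
                memory = torch.cat([phoneme_memory, past], dim=1)
            else:
                memory = phoneme_memory
            frames = self.out_proj(self.decoder(query, memory))  # (1, dur, n_mels)
            history.append(frames)
            mels.append(frames)
        return torch.cat(mels, dim=1)  # (1, total_frames, n_mels)

# Toy usage: 5 encoded phonemes with hypothetical durations.
dec = PhonemeLevelARDecoder()
mel = dec.infer(torch.randn(1, 5, 256), durations=[7, 4, 9, 6, 5])
print(mel.shape)  # torch.Size([1, 31, 80])
```

The key efficiency point is that the loop runs once per phoneme rather than once per frame, so the number of sequential decoding steps drops by roughly the average phoneme duration, consistent with the reported speedup over Transformer-TTS.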
Keywords
speech synthesis, sequence-to-sequence, Transformer, phoneme-level autoregression