Evaluating Text-to-Speech Synthesis from a Large Discrete Token-based Speech Language Model
International Conference on Computational Linguistics (2024)
Abstract
Recent advances in generative language modeling applied to discrete speech
tokens have opened a new avenue for text-to-speech (TTS) synthesis. These speech
language models (SLMs), similarly to their textual counterparts, are scalable,
probabilistic, and context-aware. While they can produce diverse and natural
outputs, they sometimes face issues such as unintelligibility, the inclusion
of non-speech noises, and hallucination. As the adoption of this innovative
paradigm in speech synthesis increases, there is a clear need for an in-depth
evaluation of its capabilities and limitations. In this paper, we evaluate TTS
from a discrete token-based SLM, through both automatic metrics and listening
tests. We examine five key dimensions: speaking style, intelligibility, speaker
consistency, prosodic variation, and spontaneous behaviour. Our results highlight
the model's strength in generating varied prosody and spontaneous outputs. It
is also rated higher in naturalness and context appropriateness in listening
tests than a conventional TTS system. However, the model's performance in
intelligibility and speaker consistency lags behind that of traditional TTS.
Additionally, we show that increasing the scale of SLMs offers a modest boost
in robustness. We intend our findings to serve as a benchmark for future
advancements in generative SLMs for speech synthesis.
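
The abstract does not specify which automatic metrics were used. As a hedged illustration only, the sketch below shows two measures commonly applied to these dimensions: ASR-based word error rate as a proxy for intelligibility, and speaker-embedding cosine similarity as a proxy for speaker consistency. The file names, reference text, and choice of libraries (openai-whisper, jiwer, resemblyzer) are assumptions for illustration, not the paper's actual evaluation setup.

# Minimal sketch of two common automatic metrics for synthesized speech.
# Assumptions: openai-whisper for ASR, jiwer for WER, resemblyzer for
# speaker embeddings; all paths and the reference transcript are placeholders.
import numpy as np
import whisper                                         # ASR model for transcription
from jiwer import wer                                  # word error rate
from resemblyzer import VoiceEncoder, preprocess_wav   # speaker embeddings

def intelligibility_wer(wav_path: str, reference_text: str) -> float:
    """Transcribe the synthesized audio with ASR and score WER against the input text."""
    asr = whisper.load_model("base")
    hypothesis = asr.transcribe(wav_path)["text"]
    return wer(reference_text.lower(), hypothesis.lower())

def speaker_similarity(wav_path_a: str, wav_path_b: str) -> float:
    """Cosine similarity between speaker embeddings of two utterances
    (e.g. a speaker prompt vs. the generated continuation)."""
    encoder = VoiceEncoder()
    emb_a = encoder.embed_utterance(preprocess_wav(wav_path_a))
    emb_b = encoder.embed_utterance(preprocess_wav(wav_path_b))
    return float(np.dot(emb_a, emb_b) / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))

if __name__ == "__main__":
    # Hypothetical files: a prompt utterance and a synthesized continuation.
    print("WER:", intelligibility_wer("synth.wav", "the text that was synthesized"))
    print("Speaker similarity:", speaker_similarity("prompt.wav", "synth.wav"))

Lower WER indicates more intelligible output, and higher cosine similarity indicates that the generated speech stays closer to the prompt speaker's voice; listening tests would complement these scores for naturalness and context appropriateness.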