
SiFiSinger: A High-Fidelity End-to-End Singing Voice Synthesizer Based on Source-Filter Model

IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2024

Abstract
This paper presents an advanced end-to-end singing voice synthesis (SVS) system based on the source-filter mechanism that directly translates lyrical and melodic cues into expressive, high-fidelity, human-like singing. Similar to VISinger 2, the proposed system utilizes training paradigms evolved from VITS and incorporates elements such as a fundamental frequency (F0) predictor and a waveform generation decoder. To address the issue that coupling mel-spectrogram features with F0 information may introduce errors during F0 prediction, we consider two strategies. First, we leverage mel-cepstrum (mcep) features to decouple the intertwined mel-spectrogram and F0 characteristics. Second, inspired by neural source-filter models, we introduce source excitation signals as the representation of F0 in the SVS system, aiming to capture pitch nuances more accurately. Meanwhile, differentiable mcep and F0 losses are employed as supervision for the waveform decoder, to fortify the prediction accuracy of the spectral envelope and pitch in the generated speech. Experiments on the Opencpop dataset demonstrate the efficacy of the proposed model in synthesis quality and intonation accuracy. Synthesized audio samples are available at: https://sounddemos.github.io/sifisinger.
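
As a concrete illustration of the second strategy, the sketch below builds a sine-based source excitation signal from a frame-level F0 contour, in the spirit of neural source-filter models: a sine wave drives voiced regions and noise drives unvoiced ones. This is a minimal illustrative sketch, not the authors' implementation; the function name, sample rate, hop size, and amplitude/noise constants are assumptions.

```python
# Minimal sketch (assumed parameters, not the paper's code) of a
# sine-plus-noise source excitation derived from an F0 contour.
import numpy as np

def sine_excitation(f0, sample_rate=24000, hop_size=256,
                    voiced_amp=0.1, noise_std=0.003):
    """Upsample a frame-level F0 contour and convert it into a source
    excitation: a sine wave in voiced frames (f0 > 0) plus a small noise
    floor, with noise alone in unvoiced frames."""
    # Repeat each frame-level F0 value for hop_size samples.
    f0_upsampled = np.repeat(f0, hop_size)
    voiced = f0_upsampled > 0
    # Integrate instantaneous frequency to obtain phase, then take the sine.
    phase = 2 * np.pi * np.cumsum(f0_upsampled / sample_rate)
    excitation = voiced_amp * np.sin(phase) * voiced
    # Small Gaussian noise floor everywhere; it is the sole source
    # component in unvoiced regions.
    excitation += noise_std * np.random.randn(len(f0_upsampled))
    return excitation.astype(np.float32)

# Usage: 100 frames of 220 Hz (voiced) followed by 50 unvoiced frames.
f0_contour = np.concatenate([np.full(100, 220.0), np.zeros(50)])
source = sine_excitation(f0_contour)
print(source.shape)  # (150 * 256,) excitation samples
```

In an SVS system such as the one described here, an excitation of this kind would condition the waveform decoder so that pitch is carried explicitly by the source signal rather than being entangled with the spectral features.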
Key words
Singing voice synthesis, variational autoencoder, adversarial learning, neural source-filter model, VITS