FastSAG: Towards Fast Non-Autoregressive Singing Accompaniment Generation
arXiv (2024)
Abstract
Singing Accompaniment Generation (SAG), which generates instrumental music to
accompany input vocals, is crucial to developing human-AI symbiotic art
creation systems. The state-of-the-art method, SingSong, utilizes a multi-stage
autoregressive (AR) model for SAG. However, this method is extremely slow because it
generates semantic and acoustic tokens recursively, which makes it
unsuitable for real-time applications. In this paper, we aim to develop a Fast
SAG method that can create high-quality and coherent accompaniments. A non-AR
diffusion-based framework is developed, which by carefully designing the
conditions inferred from the vocal signals, generates the Mel spectrogram of
the target accompaniment directly. With diffusion and Mel spectrogram modeling,
the proposed method significantly simplifies the AR token-based SingSong
framework and greatly accelerates generation. We also design semantic
projection and prior projection blocks, as well as a set of loss functions, to
ensure that the generated accompaniment is semantically and rhythmically coherent with the
vocal signal. Through extensive experiments, we demonstrate that the
proposed method generates better samples than SingSong and accelerates
generation by at least 30 times. Audio samples and code are available at
https://fastsag.github.io/.
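The core idea described above, replacing AR token generation with a diffusion model that denoises the accompaniment Mel spectrogram under vocal-derived conditions, can be sketched minimally. This is an illustrative NumPy sketch, not the paper's implementation: the shapes, the linear DDPM noise schedule, and the simple concatenation standing in for the paper's semantic/prior projection blocks are all assumptions.

```python
import numpy as np

# Hypothetical shapes: 80 Mel bins x 128 frames; 1000 diffusion steps.
N_MELS, N_FRAMES, T_STEPS = 80, 128, 1000

# Standard DDPM linear beta schedule (an assumption, not the paper's exact schedule).
betas = np.linspace(1e-4, 0.02, T_STEPS)
alpha_bars = np.cumprod(1.0 - betas)

def noise_mel(mel_accomp, t, rng):
    """Forward diffusion: corrupt the accompaniment Mel spectrogram at step t."""
    eps = rng.standard_normal(mel_accomp.shape)
    x_t = np.sqrt(alpha_bars[t]) * mel_accomp + np.sqrt(1.0 - alpha_bars[t]) * eps
    return x_t, eps

def denoiser_input(x_t, vocal_cond):
    """Stack the noisy accompaniment with vocal-derived conditioning features
    (a stand-in for the semantic/prior projection outputs fed to the denoiser)."""
    return np.concatenate([x_t, vocal_cond], axis=0)

rng = np.random.default_rng(0)
mel = rng.standard_normal((N_MELS, N_FRAMES))    # target accompaniment Mel
cond = rng.standard_normal((N_MELS, N_FRAMES))   # vocal-conditioned features
x_t, eps = noise_mel(mel, t=500, rng=rng)
inp = denoiser_input(x_t, cond)                  # what a denoising network would consume
```

Because the denoiser sees the noisy accompaniment and the vocal conditions jointly at every step, all time-frequency positions are predicted in parallel, which is the source of the speedup over recursive token generation.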