Music Consistency Models
arXiv (2024)
Abstract
Consistency models have exhibited remarkable capabilities in facilitating
efficient image/video generation, enabling synthesis with minimal sampling
steps. They have proven advantageous in mitigating the computational burdens
associated with diffusion models. Nevertheless, the application of consistency
models in music generation remains largely unexplored. To address this gap, we
present Music Consistency Models, which leverage the concept of consistency
models to efficiently synthesize mel-spectrograms for music clips, maintaining
high quality while minimizing the number of sampling steps. Building upon
existing text-to-music diffusion models, our model incorporates consistency
distillation and adversarial discriminator training. Moreover, we find it
beneficial to generate extended coherent music by incorporating multiple
diffusion processes with shared constraints. Experimental results reveal the
effectiveness of our model in terms of computational efficiency, fidelity, and
naturalness. Notably, it achieves seamless music synthesis with a mere four
sampling steps, e.g., only one second per minute of the music clip, showcasing
the potential for real-time application.
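The "four sampling steps" regime the abstract describes corresponds to multistep consistency sampling: map noise straight to a data estimate, then alternately re-noise to a lower noise level and denoise again. A minimal illustrative sketch on a toy 1-D Gaussian follows; it is not the paper's method. For data distributed as N(0, 1) under the noising scheme x_t = x_0 + t·z, the probability-flow ODE admits a closed-form consistency function f(x, t) = x / sqrt(1 + t²), which stands in here for the learned mel-spectrogram network; the noise schedule values are illustrative assumptions.

```python
import numpy as np

def f_consistency(x, t):
    # Exact consistency function for N(0, 1) data under x_t = x_0 + t * z:
    # maps a point on a probability-flow ODE trajectory back to its origin.
    return x / np.sqrt(1.0 + t * t)

def multistep_sample(rng, n, t_max=80.0, taus=(20.0, 5.0, 1.0)):
    # Step 1: map pure noise at the highest level t_max straight to data.
    x = rng.standard_normal(n) * np.sqrt(1.0 + t_max**2)
    x0 = f_consistency(x, t_max)
    # Steps 2-4: re-noise to a lower level tau, then denoise again.
    for tau in taus:
        x = x0 + tau * rng.standard_normal(n)
        x0 = f_consistency(x, tau)
    return x0

rng = np.random.default_rng(0)
samples = multistep_sample(rng, 100_000)
print(samples.mean(), samples.std())  # near the data moments (0, 1)
```

Because the toy consistency function is exact, each of the four steps returns samples with the data distribution's statistics; with a learned network, the extra steps trade a little compute for refinement of the one-step estimate.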