RL-Chord: CLSTM-Based Melody Harmonization Using Deep Reinforcement Learning

IEEE Trans Neural Networks Learn Syst (2024)

Abstract
Automatic music generation is the combination of artificial intelligence and art, in which melody harmonization is a significant and challenging task. However, previous recurrent neural network (RNN)-based work fails to maintain long-term dependencies and neglects the guidance of music theory. In this article, we first devise a universal chord representation with a fixed small dimension, which can cover most existing chords and is easy to expand. Then a novel melody harmonization system based on reinforcement learning (RL), RL-Chord, is proposed to generate high-quality chord progressions. Specifically, a melody conditional LSTM (CLSTM) model is put forward that learns the transition and duration of chords well, based on which RL algorithms with three well-designed reward modules are combined to construct RL-Chord. We compare three widely used RL algorithms (i.e., policy gradient, Q-learning, and actor-critic algorithms) on the melody harmonization task for the first time and prove the superiority of deep Q-network (DQN). Furthermore, a style classifier is devised to fine-tune the pretrained DQN-Chord for zero-shot Chinese folk (CF) melody harmonization. Experimental results demonstrate that the proposed model can generate harmonious and fluent chord progressions for diverse melodies. Quantitatively, DQN-Chord achieves better performance than the compared methods on multiple evaluation metrics, such as chord histogram similarity (CHS), chord tonal distance (CTD), and melody-chord tonal distance (MCTD).
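The abstract does not specify the exact fields of the chord representation, the reward modules, or their weights, so the following is only a plausible sketch of the two ideas it names: a fixed-dimension, easily extensible chord vector, and a DQN-style one-step target driven by a composite reward. All names and values here (QUALITIES, encode_chord, composite_reward, dqn_target, the weights) are hypothetical, not the paper's.

```python
import numpy as np

PITCH_CLASSES = 12  # pitch classes C, C#, ..., B

# Small, extensible chord-quality vocabulary; appending a new quality adds
# exactly one slot, so the vector keeps a fixed small dimension per
# vocabulary version, in the spirit of the abstract's "easy to expand" claim.
QUALITIES = ["maj", "min", "dim", "aug", "dom7", "maj7", "min7"]

def encode_chord(root: int, quality: str, bass: int) -> np.ndarray:
    """Encode a chord as [root one-hot (12) | quality one-hot | bass one-hot (12)]."""
    dim = PITCH_CLASSES + len(QUALITIES) + PITCH_CLASSES
    vec = np.zeros(dim, dtype=np.float32)
    vec[root] = 1.0                                      # root pitch class
    vec[PITCH_CLASSES + QUALITIES.index(quality)] = 1.0  # chord quality
    vec[PITCH_CLASSES + len(QUALITIES) + bass] = 1.0     # bass note (covers inversions)
    return vec

def composite_reward(r_theory: float, r_clstm: float, r_style: float,
                     w=(0.4, 0.4, 0.2)) -> float:
    """Weighted sum of three reward modules; weights are illustrative only."""
    return w[0] * r_theory + w[1] * r_clstm + w[2] * r_style

def dqn_target(reward: float, q_next_max: float,
               gamma: float = 0.95, done: bool = False) -> float:
    """Standard one-step Bellman target y = r + gamma * max_a' Q(s', a')."""
    return reward if done else reward + gamma * q_next_max

# Example: C major in root position (root = bass = pitch class 0).
c_major = encode_chord(root=0, quality="maj", bass=0)
print(c_major.shape)  # (31,) -- a fixed small dimension

r = composite_reward(r_theory=0.8, r_clstm=0.6, r_style=0.5)
print(dqn_target(reward=r, q_next_max=1.2))
```

Under this reading, covering more chord types means extending QUALITIES rather than changing the vector layout, which is one way a representation can stay fixed in structure yet remain easy to expand.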
Keywords
Hidden Markov models, Task analysis, Training, Maximum likelihood estimation, Measurement, Deep learning, Q-learning, Deep reinforcement learning (RL), long short-term memory, melody harmonization with chords, symbolic music generation