HMM-Based Voice Separation of MIDI Performance

Journal of New Music Research (2016)

Cited by 11 | Views 10
Abstract
Voice separation is an important component of Music Information Retrieval (MIR). In this paper, we present an HMM which can be used to separate music performance data in the form of MIDI into monophonic voices. It works on two basic principles: consecutive notes within a single voice tend to occur on similar pitches, and the temporal gaps between them tend to be short, if present at all. We also present an incremental algorithm which can perform inference on the model efficiently. We show that our approach achieves a significant improvement over existing approaches when run on a corpus of 78 compositions by J.S. Bach, each of which has been separated into the gold-standard voices suggested by the original score. We also show that it can be used to perform voice separation on live MIDI data without an appreciable loss in accuracy. The code for the model described here is available at https://github.com/apmcleod/voice-splitting.
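The two principles above, pitch proximity and short temporal gaps, can be sketched as a simple greedy assignment. This is only a toy illustration, not the authors' HMM or their incremental inference algorithm: each note joins the free voice whose last note scores highest under Gaussian-like pitch and gap terms, or starts a new voice. All parameter values (`pitch_std`, `gap_std`, `new_voice_penalty`) are hypothetical.

```python
import math
from dataclasses import dataclass


@dataclass
class Note:
    onset: float   # onset time in seconds
    offset: float  # offset time in seconds
    pitch: int     # MIDI pitch number


def assign_voices(notes, pitch_std=4.0, gap_std=0.5, new_voice_penalty=1e-4):
    """Greedily assign notes to monophonic voices.

    A note may only join a voice whose last note has already ended
    (keeping each voice monophonic). Candidate voices are scored by the
    product of a Gaussian-like pitch-proximity term and a temporal-gap
    term; if no voice beats `new_voice_penalty`, a new voice is started.
    """
    voices = []  # each voice is a time-ordered list of Notes
    for note in sorted(notes, key=lambda n: n.onset):
        best, best_score = None, new_voice_penalty
        for voice in voices:
            last = voice[-1]
            if last.offset > note.onset:
                continue  # voice still sounding: joining would break monophony
            gap = note.onset - last.offset
            score = (math.exp(-0.5 * ((note.pitch - last.pitch) / pitch_std) ** 2)
                     * math.exp(-0.5 * (gap / gap_std) ** 2))
            if score > best_score:
                best, best_score = voice, score
        if best is None:
            voices.append([note])
        else:
            best.append(note)
    return voices


# Two interleaved melodic lines separate cleanly under these scores.
notes = [
    Note(0.0, 0.9, 60), Note(0.0, 0.9, 72),
    Note(1.0, 1.9, 62), Note(1.0, 1.9, 71),
    Note(2.0, 2.9, 64), Note(2.0, 2.9, 69),
]
voices = assign_voices(notes)
print([[n.pitch for n in v] for v in voices])  # [[60, 62, 64], [72, 71, 69]]
```

The real model replaces these heuristic scores with HMM emission and transition probabilities and searches over assignments incrementally, which is what allows it to run on live MIDI input.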
Keywords
voice, perception, music analysis, software