End-to-end learning for music audio

IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2014

Cited by 506 | Views 59
Abstract
Content-based music information retrieval tasks have traditionally been solved using engineered features and shallow processing architectures. In recent years, there has been increasing interest in using feature learning and deep architectures instead, thus reducing the required engineering effort and the need for prior knowledge. However, this new approach typically still relies on mid-level representations of music audio, e.g. spectrograms, instead of raw audio signals. In this paper, we investigate whether it is possible to apply feature learning directly to raw audio signals. We train convolutional neural networks using both approaches and compare their performance on an automatic tagging task. Although they do not outperform a spectrogram-based approach, the networks are able to autonomously discover frequency decompositions from raw audio, as well as phase- and translation-invariant feature representations.
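To illustrate the raw-audio setup described in the abstract, here is a minimal sketch in PyTorch of a 1-D convolutional network that maps a waveform directly to tag probabilities. This is not the authors' exact architecture: the layer sizes, kernel widths, strides, and the number of tags (N_TAGS) are illustrative assumptions chosen only to make the example runnable.

```python
# Hedged sketch: a 1-D CNN tagger operating on raw audio. Architecture details
# (filter counts, kernel sizes, strides, N_TAGS) are assumptions, not the
# configuration reported in the paper.
import torch
import torch.nn as nn

N_TAGS = 50  # assumed number of output tags

class RawAudioTagger(nn.Module):
    """Learns a frequency decomposition from raw audio with a strided first conv layer."""
    def __init__(self, n_tags: int = N_TAGS):
        super().__init__()
        self.features = nn.Sequential(
            # The strided first convolution plays the role of the spectrogram:
            # its filters can learn a frequency decomposition of the waveform.
            nn.Conv1d(1, 32, kernel_size=256, stride=256),
            nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=8),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 32, kernel_size=8),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),  # max-pooling over time gives translation invariance
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32, 100),
            nn.ReLU(),
            nn.Linear(100, n_tags),
        )

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, 1, n_samples), e.g. a few seconds at 16 kHz
        return self.classifier(self.features(waveform))


if __name__ == "__main__":
    model = RawAudioTagger()
    audio = torch.randn(4, 1, 16000 * 3)      # four random 3-second clips
    logits = model(audio)                      # (4, N_TAGS)
    probs = torch.sigmoid(logits)              # multi-label tag probabilities
    targets = torch.randint(0, 2, (4, N_TAGS)).float()
    loss = nn.BCEWithLogitsLoss()(logits, targets)
    print(probs.shape, loss.item())
```

The design point the paper investigates is captured by the first layer: a spectrogram-based variant would replace it with a fixed time-frequency transform, whereas here the strided convolution is free to discover a frequency decomposition from the raw samples.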
Keywords
content-based retrieval, learning (artificial intelligence), music, automatic tagging, content-based music information retrieval, convolutional neural networks, end-to-end learning, feature learning, frequency decompositions, music audio, phase- and translation-invariant feature representations, raw audio, spectrogram-based approach, music information retrieval