A deep representation for invariance and music classification

ICASSP (2014)

Citations: 32 | Views: 81
Abstract
Representations in the auditory cortex might be based on mechanisms similar to those of the visual ventral stream: modules for building invariance to transformations and multiple layers for compositionality and selectivity. In this paper we propose the use of such computational modules for extracting invariant and discriminative audio representations. Building on a theory of invariance in hierarchical architectures, we propose a novel mid-level representation for acoustical signals, using the empirical distributions of projections on a set of templates and their transformations. Under the assumption that, by construction, this dictionary of templates is composed of similar classes and samples the orbit of variance-inducing signal transformations (such as shift and scale), the resulting signature is theoretically guaranteed to be unique, invariant to transformations, and stable to deformations. Modules of projection and pooling can then constitute layers of deep networks for learning composite representations. We present the main theoretical and computational aspects of a framework for unsupervised learning of invariant audio representations, empirically evaluated on music genre classification.
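The projection-and-pooling module described above can be sketched numerically. The snippet below is a minimal illustration, not the paper's implementation: it projects a signal onto shifted copies of each template (sampling the orbit of the shift transformation) and pools the projections into an empirical distribution (a histogram), yielding a signature that is unchanged when the input signal is shifted. All names and parameters (`invariant_signature`, `n_shifts`, `n_bins`) are illustrative assumptions.

```python
import numpy as np

def invariant_signature(x, templates, n_shifts=16, n_bins=10):
    """Illustrative sketch of one projection-and-pooling module:
    project x onto shifted copies of each template, then pool the
    projections into an empirical distribution (histogram)."""
    n = len(x)
    sig = []
    # shift offsets sampling the orbit of the shift transformation
    shifts = np.linspace(0, n, n_shifts, endpoint=False).astype(int)
    for t in templates:
        projections = [np.dot(x, np.roll(t, s)) for s in shifts]
        # pooling: empirical distribution of the projections;
        # unit-norm x and t keep projections in (-1, 1) (Cauchy-Schwarz)
        hist, _ = np.histogram(projections, bins=n_bins, range=(-1.0, 1.0))
        sig.append(hist / n_shifts)
    return np.concatenate(sig)

rng = np.random.default_rng(0)
x = rng.standard_normal(64); x /= np.linalg.norm(x)
templates = [rng.standard_normal(64) for _ in range(3)]
templates = [t / np.linalg.norm(t) for t in templates]

# Shifting the input by a sampled offset permutes the projections
# but leaves their empirical distribution, hence the signature, intact.
s1 = invariant_signature(x, templates)
s2 = invariant_signature(np.roll(x, 4), templates)
```

Because the shift offsets form a closed set under the applied shift, the multiset of projections is the same for `x` and its shifted copy, so `s1` and `s2` agree; stacking such modules gives the deep, compositional representations the paper evaluates on genre classification.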
Keywords
convolutional networks,signal representation,compositionality,invariance theory,acoustical signal mid-level representation,auditory cortex,music,deep learning,variance-inducing signal transformation,music genre classification,audio representation extraction,music classification,acoustic signal processing,hierarchical architectures,signal classification,deep representation,projection module,pooling module,invariance,selectivity,unsupervised learning,visual ventral stream,multiple signal classification,computer architecture,scattering