Self-Attention for Audio Super-Resolution

2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP)

Citations 4 | Views 17
Abstract
Convolutions operate only locally, thus failing to model global interactions. Self-attention, however, is able to learn representations that capture long-range dependencies in sequences. We propose a network architecture for audio super-resolution that combines convolution and self-attention. Attention-based Feature-Wise Linear Modulation (AFiLM) uses a self-attention mechanism instead of recurrent ...
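The abstract describes AFiLM, which replaces the recurrent network in Feature-Wise Linear Modulation (FiLM) with self-attention to modulate the activations of a convolutional model. Below is a minimal sketch of such a block, assuming PyTorch; the block size, head count, max-pooling scheme, and all names (the AFiLM class, the to_film projection) are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch of an AFiLM-style block: self-attention over pooled feature blocks
# produces per-block (scale, shift) pairs that modulate the feature map.
import torch
import torch.nn as nn

class AFiLM(nn.Module):
    def __init__(self, channels: int, block_size: int, num_heads: int = 4):
        super().__init__()
        self.block_size = block_size
        # Self-attention over pooled blocks stands in for the RNN used in FiLM.
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        # Project each attended block summary to a (scale, shift) pair per channel.
        self.to_film = nn.Linear(channels, 2 * channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time); time is assumed a multiple of block_size.
        b, c, t = x.shape
        n_blocks = t // self.block_size
        # Max-pool each block to one summary vector: (batch, n_blocks, channels).
        pooled = x.view(b, c, n_blocks, self.block_size).amax(dim=-1).transpose(1, 2)
        attended, _ = self.attn(pooled, pooled, pooled)
        gamma, beta = self.to_film(attended).chunk(2, dim=-1)  # each (b, n_blocks, c)
        # Broadcast the per-block modulation over the samples inside each block.
        gamma = gamma.transpose(1, 2).repeat_interleave(self.block_size, dim=-1)
        beta = beta.transpose(1, 2).repeat_interleave(self.block_size, dim=-1)
        return gamma * x + beta
```

For instance, AFiLM(channels=128, block_size=64) would modulate a (batch, 128, 4096) feature map with 64 per-block scale/shift pairs. Because self-attention processes all blocks in parallel, such a modulator avoids the step-by-step recurrence of an RNN-based FiLM layer.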
Keywords
Training, Recurrent neural networks, Convolution, Superresolution, Modulation, Machine learning, Network architecture