Optimizing Depthwise Convolutions on ARMv8 Architecture

Parallel and Distributed Computing, Applications and Technologies (2023)

Abstract
Depthwise convolutions are widely used in lightweight convolutional neural networks (CNNs). Unlike classic convolutions, the performance of depthwise convolutions is bounded mainly by memory access rather than arithmetic operations, so direct algorithms are often more efficient than indirect ones (matrix multiplication-, Winograd-, and FFT-based convolutions), which incur additional memory accesses. However, existing direct implementations of depthwise convolutions on ARMv8 architectures make a poor trade-off between the register-level reuse of the different tensors, which usually leads to sub-optimal performance. In this paper, we propose a new direct implementation of depthwise convolutions based on implicit padding, register tiling, and related techniques. Compared to existing implementations, ours incurs much less communication overhead between registers and cache. Experimental results on two ARMv8 CPUs show that our implementation delivers an average 4.88× performance improvement over the existing direct implementations in open-source libraries.
Keywords
depthwise convolutions, architecture