Optimizing Grouped Convolutions on Edge Devices

2020 IEEE 31st International Conference on Application-specific Systems, Architectures and Processors (ASAP), 2020

Cited by 26 | Views 298
Abstract
When deploying a deep neural network on constrained hardware, it is possible to replace the network’s standard convolutions with grouped convolutions. This allows for substantial memory savings with minimal loss of accuracy. However, current implementations of grouped convolutions in modern deep learning frameworks are far from optimal in terms of speed. In this paper we propose Grouped Spatial Pack Convolutions (GSPC), a new implementation of grouped convolutions that outperforms existing solutions. We implement GSPC in TVM, which provides state-of-the-art performance on edge devices. We analyze a set of networks utilizing different types of grouped convolutions and evaluate their performance in terms of inference time on several edge devices. We observe that our new implementation scales well with the number of groups and provides the best inference times in all settings, improving on the existing implementations of grouped convolutions in TVM, PyTorch and TensorFlow Lite by $3.4\times$, $8\times$ and $4\times$ on average, respectively. Code is available at https://github.com/gecLAB/tvm-GSPC/
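To make the memory argument concrete: a grouped convolution splits the input channels into `groups` independent slices, each convolved with its own filters, so the weight tensor shrinks from `C_out × C_in × kH × kW` to `C_out × (C_in / groups) × kH × kW`. The following is a minimal NumPy sketch of this idea (an illustration only, not the paper's GSPC implementation, which is a tuned TVM schedule):

```python
import numpy as np

def grouped_conv2d(x, w, groups):
    """Naive grouped 2D convolution (no padding, stride 1).

    x: input of shape (C_in, H, W)
    w: weights of shape (C_out, C_in // groups, kH, kW)
       -- note the second dimension is C_in // groups, which is
       where the memory saving over a standard convolution comes from.
    """
    C_in, H, W = x.shape
    C_out, Cg, kH, kW = w.shape
    assert C_in % groups == 0 and C_out % groups == 0
    assert Cg == C_in // groups
    out_H, out_W = H - kH + 1, W - kW + 1
    y = np.zeros((C_out, out_H, out_W))
    cpg_in = C_in // groups    # input channels per group
    cpg_out = C_out // groups  # output channels per group
    for g in range(groups):
        # Each group only sees its own slice of the input channels.
        xg = x[g * cpg_in:(g + 1) * cpg_in]
        for oc in range(cpg_out):
            oc_global = g * cpg_out + oc
            for i in range(out_H):
                for j in range(out_W):
                    y[oc_global, i, j] = np.sum(
                        xg[:, i:i + kH, j:j + kW] * w[oc_global])
    return y
```

With `groups = g`, the weight count drops by a factor of `g`: e.g. for `C_in = C_out = 8` and a 3×3 kernel, a standard convolution holds 8·8·3·3 = 576 weights, while `groups=4` needs only 8·2·3·3 = 144. Frameworks expose the same idea directly (e.g. the `groups` argument of PyTorch's `nn.Conv2d`).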
Keywords
grouped convolutions, edge devices