A Framework for Neural Network Architecture and Compiler Co-optimization

ACM Transactions on Embedded Computing Systems (2023)

Abstract

The efficiency of deep neural network (DNN) solutions on real hardware devices is mainly determined by the DNN architecture and the compiler-level scheduling strategy on the hardware. When trying to fully exploit the underlying hardware and obtain the optimal trade-off between DNN accuracy and runtime performance, we discovered that the two optimization goals of DNN architecture and scheduling policy are intimately related to each other. However, current hardware-aware Neural Architecture Search (NAS) methods focus primarily on the DNN architecture search process, ignoring the effects of different compiler-level scheduling strategies (e.g., graph-level optimization, loop transformations, parallelization, etc.) on the network candidates being evaluated during the search. As a result, they may overlook the truly optimal DNN implementations on the hardware, which can only be discovered by trying out different combinations of scheduling strategies and DNN architectures. This work proposes a NAS framework (CHaNAS) that searches not only for the network architecture but also for the dedicated compiler-level scheduling policy, yielding the optimal co-design solution on the target hardware. We propose a block-based pre-scheduling methodology to reduce the co-design search space and enable the automatic generation of the optimal co-design, including both the network architecture and the tensor programs that implement the scheduling policy. Further, we introduce a new search objective function based on the generalization gap to prevent the selection of architectures that are prone to overfitting. We evaluate CHaNAS on ImageNet on different hardware back-ends against the state-of-the-art hardware-aware search method based on the MobileNet-v3 search space.
Experimental results show that the co-design solutions obtained by CHaNAS achieve up to 1.6×, 1.9×, and 1.7× performance boosts on an NVIDIA P100 GPU, an Intel Xeon 8163 CPU, and a Samsung Note 10 mobile device, respectively, over baselines of the same accuracy level.
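The abstract describes a search over joint (architecture, schedule) candidates whose objective rewards validation accuracy, penalizes the generalization gap, and enforces a hardware latency constraint. A minimal sketch of such a co-design search loop is below; it is an illustration under assumed names, not the authors' implementation — the candidate lists, the `evaluate` stand-in, and the latency budget are all hypothetical placeholders for real training and on-device profiling.

```python
import random
from itertools import product

# Hypothetical candidate spaces (placeholders, not the CHaNAS search space).
ARCHS = ["mbv3-a", "mbv3-b", "mbv3-c"]
SCHEDULES = ["vectorize", "parallel", "tile"]

def evaluate(arch: str, schedule: str):
    """Stand-in for training + on-device profiling.

    Returns (train_acc, val_acc, latency_ms). Deterministic pseudo-random
    values replace real measurements for this sketch.
    """
    rng = random.Random(arch + "/" + schedule)  # str seed is deterministic
    train_acc = rng.uniform(0.70, 0.80)
    val_acc = train_acc - rng.uniform(0.0, 0.05)
    latency_ms = rng.uniform(10.0, 30.0)
    return train_acc, val_acc, latency_ms

def objective(arch: str, schedule: str, latency_budget_ms: float = 25.0):
    """Score a co-design candidate: validation accuracy minus the
    generalization gap, with a hard latency constraint."""
    train_acc, val_acc, latency_ms = evaluate(arch, schedule)
    if latency_ms > latency_budget_ms:
        return float("-inf")  # infeasible on the target device
    gap = train_acc - val_acc  # generalization-gap penalty
    return val_acc - gap

# Exhaustive search over the tiny joint space; a real framework would
# prune it (e.g., via block-based pre-scheduling) before evaluation.
best = max(product(ARCHS, SCHEDULES), key=lambda p: objective(*p))
print("best co-design:", best)
```

The key point the sketch captures is that the architecture and the schedule are scored *together*: a fast schedule can make an otherwise-rejected architecture feasible, which is exactly the interaction that architecture-only NAS misses.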
Keywords
DNN-scheduling co-design, hardware-aware neural architecture search, compiler optimization