An Energy-Efficient Quantized and Regularized Training Framework For Processing-In-Memory Accelerators

2020 25th Asia and South Pacific Design Automation Conference (ASP-DAC)(2020)

Abstract
Convolutional Neural Networks (CNNs) have made breakthroughs in various fields, but their energy consumption is enormous. Processing-In-Memory (PIM) architectures based on emerging non-volatile memory (e.g., Resistive Random Access Memory, RRAM) have demonstrated great potential for improving the energy efficiency of CNN computing. However, there is still much room for improvement in existing PIM architectures. On the one hand, prior work shows that high-resolution Analog-to-Digital Converters (ADCs) are required to maintain computing accuracy, yet they account for more than 60% of the energy consumption of the entire system, undermining the energy-efficiency benefits of PIM. On the other hand, because PIM accelerators compute in the analog domain, their computing energy consumption depends on the specific input and weight values; to the best of our knowledge, however, no existing work exploits this characteristic for energy-efficiency optimization. To address these problems, we propose an energy-efficient quantized and regularized training framework for PIM accelerators, which consists of a PIM-based non-uniform activation quantization scheme and an energy-aware weight regularization method. The proposed framework improves the energy efficiency of PIM architectures by reducing the ADC resolution requirements and by training low-energy CNN models for PIM, with little accuracy loss. Experimental results show that the proposed training framework reduces the ADC resolution by 2 bits and the computing energy consumption in the analog domain by 35%. The overall energy efficiency is thereby enhanced by 3.4×.
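The two ingredients of the framework can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the particular non-uniform level codebook, and the L1-style energy proxy (motivated by the fact that RRAM crossbar column current scales with total programmed conductance, which tracks summed weight magnitudes) are all assumptions made for illustration.

```python
import numpy as np

def nonuniform_quantize(x, levels):
    # Map each activation to the nearest level of a non-uniform codebook
    # (illustrative stand-in for the paper's PIM-based quantization scheme).
    x = np.asarray(x, dtype=float)
    idx = np.argmin(np.abs(x[..., None] - levels), axis=-1)
    return levels[idx]

def energy_regularizer(weights, lam=1e-3):
    # Assumed energy proxy: analog-domain compute energy grows with the
    # summed |weight| magnitudes, so penalize them during training.
    return lam * float(np.sum(np.abs(np.asarray(weights))))

# Example: a coarse non-uniform codebook for non-negative activations,
# denser near zero where activation values concentrate.
levels = np.array([0.0, 0.125, 0.25, 0.5, 1.0])
acts = np.array([0.05, 0.3, 0.7, 0.9])
q = nonuniform_quantize(acts, levels)   # -> [0.0, 0.25, 0.5, 1.0]
```

In a real training loop, `energy_regularizer` would be added to the task loss so gradient descent trades a small amount of accuracy for lower analog-domain energy, matching the paper's reported 35% reduction.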
Keywords
computing energy consumption,regularized training framework,processing-in-memory accelerators,processing-in-memory architectures,emerging nonvolatile memory,resistive random access memory,existing PIM architectures,energy consumption,energy efficiency benefits,PIM accelerators,energy efficiency optimization method,PIM-based nonuniform activation quantization scheme,energy-aware weight regularization method,low energy consumption CNN models,energy-efficient quantized framework,word length 2.0 bit