A Heterogeneous Microprocessor Based on All-Digital Compute-in-Memory for End-to-End AIoT Inference

IEEE Transactions on Circuits and Systems II: Express Briefs (2023)

Abstract
Deploying neural network (NN) models on Internet-of-Things (IoT) devices is key to enabling artificial intelligence (AI) at the edge, realizing the AI-of-Things (AIoT). However, the high energy consumption and bandwidth requirements of NN models restrict AI applications on battery-limited devices. Compute-In-Memory (CIM), featuring high energy efficiency, provides new opportunities for the IoT deployment of NNs. However, the design of full CIM-based systems is still at an early stage, lacking system-level demonstrations and vertical optimization for running end-to-end AI applications. In this brief, we demonstrate a low-power heterogeneous microprocessor System-on-Chip (SoC) with an all-digital SRAM CIM accelerator and rich data-acquisition interfaces for end-to-end AIoT NN inference. A dedicated reconfigurable dataflow controller for CIM computation greatly lowers the bandwidth requirement on the system bus and improves execution efficiency. The all-digital SRAM CIM array embeds NAND-based bit-serial multiplication within the readout sense amplifier, balancing storage density and system-level throughput. Our chip achieves a throughput of 12.8 GOPS at 10 TOPS/W energy efficiency. On the four MLPerf Tiny benchmark tasks, experimental results show a 1.8x to 2.9x inference speedup over a baseline CIM processor.
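
The core mechanism named in the abstract is bit-serial multiplication folded into the readout path. As a rough software analogy of that scheme (not the chip's actual circuit), the Python sketch below computes a dot product against 1-bit weights by streaming activation bit-planes and shift-accumulating; the function name, 8-bit width, and unsigned encoding are illustrative assumptions, not details from the paper.

    def bit_serial_dot(acts, weight_bits, act_bits=8):
        """Dot product of unsigned activations with 1-bit weights,
        processed one activation bit-plane per cycle (MSB first).

        Illustrative sketch only: widths and encoding are assumed,
        not taken from the paper.
        """
        total = 0
        for b in range(act_bits - 1, -1, -1):
            # Per-cell 1-bit multiply: AND of the activation bit and the
            # stored weight bit, which gate-level logic can form as a
            # NAND followed by an inversion.
            plane = sum(((a >> b) & 1) & w for a, w in zip(acts, weight_bits))
            total = (total << 1) + plane  # shift-accumulate across bit-planes
        return total

    # Example: 3*1 + 5*1 = 8
    assert bit_serial_dot([3, 5], [1, 1]) == 8

Serializing activations this way trades cycles for a much simpler per-cell multiplier, which is what allows the multiplication to sit in the sense-amplifier readout path without inflating the SRAM array. For scale, the reported figures imply a compute power of roughly 12.8 GOPS / 10 TOPS/W ≈ 1.3 mW during inference.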
Keywords
heterogeneous microprocessor, all-digital, compute-in-memory, end-to-end