
A Reconfigurable 16kb AND8T SRAM Macro with Improved Linearity for Multibit Compute-In Memory of Artificial Intelligence Edge Devices

IEEE Journal on Emerging and Selected Topics in Circuits and Systems (2022)

Cited by 7 | Views 20
Abstract
Compute-in-Memory (CIM) is a promising candidate for performing the energy-efficient multiply-and-accumulate (MAC) operations of modern Artificial Intelligence (AI) edge devices. This work proposes a multi-bit-precision (4b input, 4b weight, and 4b output) 128 × 128 SRAM CIM architecture. The 4b input is implemented using a voltage-scaling and charge-sharing-based scheme. To achieve efficient computation with improved linearity, a novel AND-logic-based 8T SRAM cell (AND8T) is proposed. To address the non-idealities of analog voltage- or current-based operations, the proposed AND8T employs charge-domain computation by overlaying a metal-oxide-metal capacitor (MOM cap) with no area overhead. The proposed AND8T mitigates the linearity issue of MAC operations, which is highly desirable for the reliable operation of convolutional neural networks (CNNs). The proposed 16Kb macro asserts 128 inputs in parallel and processes a 128-element 4b dot product in a single cycle for each array column (a single neuron). The macro can also be reconfigured for 64 or 32 parallel 4b inputs, depending on the needs of the CNN model. The AND8T SRAM macro is fabricated in a 65nm node and achieves an energy efficiency of 301.08 TOPS/W for 16 parallel neuron outputs, with 128 4b MAC operations at a 10MHz clock frequency and a 1V supply. The implemented macro supports clock frequencies of up to 100MHz and occupies 0.124mm² of chip area while achieving 96.05% and 87% classification accuracy on the MNIST and CIFAR-10 datasets, respectively.
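As a rough functional reference for the operation the macro performs, the following sketch models an ideal single-cycle column computation: a 128-element dot product of 4b unsigned inputs and 4b unsigned weights, with the accumulator quantized down to a 4b output. This is a hypothetical software model for intuition only; the actual macro computes in the analog charge domain, and the quantization step here is a simple illustrative stand-in, not the paper's readout circuit.

```python
import numpy as np

# Hypothetical model of one CIM column (neuron): 128 parallel
# 4b inputs x 4b weights, accumulated and quantized to 4b.
rng = np.random.default_rng(0)

N = 128                                  # parallel inputs per column
x = rng.integers(0, 16, size=N)          # 4b unsigned inputs (0..15)
w = rng.integers(0, 16, size=N)          # 4b unsigned weights (0..15)

acc = int(np.dot(x, w))                  # ideal full-precision MAC result

# Illustrative quantization of the accumulator to a 4b output
# (stand-in for the macro's analog readout, not the actual circuit).
full_scale = N * 15 * 15                 # maximum possible accumulator value
out_4b = min(15, acc * 16 // (full_scale + 1))

print(acc, out_4b)
```

Reconfiguring for 64 or 32 parallel inputs, as the abstract describes, corresponds to shrinking `N` accordingly, which also shrinks the accumulator's full-scale range.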
Keywords
Random access memory,Voltage,Linearity,Artificial intelligence,SRAM cells,Performance evaluation,Neurons,SRAM,energy-efficiency,bit-precision,multiply-and-accumulate (MAC),compute-in-memory (CIM)