
XNOR-SRAM: In-Memory Computing SRAM Macro for Binary/Ternary Deep Neural Networks

IEEE Journal of Solid-State Circuits (2020)

Abstract
We present an in-memory computing SRAM macro that computes XNOR-and-accumulate for binary/ternary deep neural networks directly on the bitline, without row-by-row data access. It achieves 33X better energy efficiency and a 300X better energy-delay product than a digital ASIC, and also achieves significantly higher accuracy than prior in-SRAM computing macros (e.g., 98.3% vs. 90% on MNIST) by supporting mainstream DNN/CNN algorithms.
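The XNOR-and-accumulate operation at the heart of the macro can be sketched in software. In a binary network, weights and activations take values in {-1, +1}, so each multiplication reduces to an XNOR of the sign bits, and the dot product becomes a population count. The sketch below is illustrative only; the encoding (bit 1 for +1, bit 0 for -1) and function names are assumptions, not from the paper, and the ternary case (activations in {-1, 0, +1}) is shown as a plain sum where a zero activation contributes nothing.

```python
def xnor_accumulate(activations, weights):
    """Multiply-accumulate for ternary activations in {-1, 0, +1}
    and binary weights in {-1, +1}. A zero activation drops out of
    the sum, which is what 'ternary' support means arithmetically."""
    return sum(a * w for a, w in zip(activations, weights))

def xnor_popcount(x_bits, w_bits, n):
    """Hardware-style equivalent for the purely binary case:
    encode +1 as bit 1 and -1 as bit 0, XNOR the two n-bit words,
    then recover the dot product as 2*popcount(XNOR) - n."""
    mask = (1 << n) - 1
    xnor = ~(x_bits ^ w_bits) & mask          # 1 where signs agree
    return 2 * bin(xnor).count("1") - n

# Ternary example: zero activation masks out the second weight.
x = [1, 0, -1, 1, 1]
w = [1, -1, 1, 1, -1]
print(xnor_accumulate(x, w))                  # 1 + 0 - 1 + 1 - 1 = 0

# Binary example, checked against the bitwise form:
# x = [1, 1, -1, 1, 1] -> bits 11011 = 27 (bit i = element i)
# w = [1, -1, 1, 1, -1] -> bits 01101 = 13
print(xnor_accumulate([1, 1, -1, 1, 1], w))   # -1
print(xnor_popcount(27, 13, 5))               # -1
```

The bitwise form is why in-memory macros like this one are attractive: the per-element multiply disappears entirely, and the accumulate can be performed in analog on the bitline rather than in a digital adder tree.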
Key words
Random access memory, Hardware, System-on-chip, Transistors, Computer architecture, Neural networks, Complexity theory, Binary weights, Deep neural networks (DNNs), Ensemble learning, In-memory computing (IMC), SRAM, Ternary activations