A 28 nm 66.8 TOPS/W Sparsity-Aware Dynamic-Precision Deep-Learning Processor.

VLSI Technology and Circuits (2023)

Abstract
The required precision for deep neural network (DNN) models strongly depends on sparsity and compactness. This paper presents a heterogeneous DNN accelerator performing dynamic-precision computing adapted to sparsity. Simulation shows that the proposed dynamic-precision computing successfully covers EfficientNets and Transformers with negligible accuracy loss. The accelerator, fabricated in a 28 nm LP CMOS process, achieves a peak energy efficiency of 66.8 TOPS/W with a peak performance of 4.2 TOPS.
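The abstract does not disclose how the accelerator selects precision from sparsity. As a loose, hypothetical sketch of the general idea only (measure the zero fraction of a tensor, pick a bit-width from it, then quantize), with thresholds and function names invented for illustration and no claim about the paper's actual datapath:

```python
# Hypothetical illustration, not the paper's method: pick a quantization
# bit-width from a tensor's measured sparsity, then apply uniform quantization.
import numpy as np


def sparsity(x: np.ndarray) -> float:
    """Fraction of zero elements in the tensor."""
    return float(np.count_nonzero(x == 0)) / x.size


def choose_bits(s: float) -> int:
    """Illustrative policy (assumed thresholds): sparser tensors get fewer bits."""
    if s > 0.8:
        return 4
    if s > 0.5:
        return 6
    return 8


def quantize(x: np.ndarray, bits: int) -> np.ndarray:
    """Symmetric uniform quantization to the chosen bit-width, then dequantize."""
    qmax = 2 ** (bits - 1) - 1
    max_abs = float(np.max(np.abs(x)))
    scale = max_abs / qmax if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return q * scale


if __name__ == "__main__":
    # Synthetic activation map with roughly 70% zeros.
    act = np.random.randn(64, 64) * (np.random.rand(64, 64) > 0.7)
    s = sparsity(act)
    bits = choose_bits(s)
    err = np.mean(np.abs(act - quantize(act, bits)))
    print(f"sparsity={s:.2f}, chosen bits={bits}, mean abs error={err:.4f}")
```

This only mimics the software-level intuition that sparser, more compact tensors tolerate lower precision; the reported 66.8 TOPS/W figure comes from the fabricated hardware, not from any scheme like the one above.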