A Flexible In-Memory Computing Architecture for Heterogeneously Quantized CNNs

2021 IEEE Computer Society Annual Symposium on VLSI (ISVLSI), 2021

Abstract
Inferences using Convolutional Neural Networks (CNNs) are resource and energy intensive. Therefore, their execution on highly constrained edge devices demands the careful co-optimization of algorithms and hardware. Addressing this challenge, in this paper we present a flexible In-Memory Computing (IMC) architecture and circuit, able to scale data representations to varying bitwidths at run-time, w...
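The abstract's central idea, scaling data representations to varying bitwidths per layer, can be illustrated with a minimal uniform-quantization sketch in software. The symmetric scheme, function names, and per-layer bitwidth choices below are illustrative assumptions, not the paper's IMC circuit:

```python
import numpy as np

def quantize(weights, bits):
    """Symmetric uniform quantization of a weight tensor to `bits` bits.

    Returns the integer codes and the scale needed to dequantize.
    """
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(weights)) / qmax
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax)
    return q.astype(np.int32), scale

# Heterogeneous quantization: a different bitwidth per layer
# (hypothetical layer names and bitwidths, for illustration only).
rng = np.random.default_rng(0)
layers = {"conv1": rng.standard_normal((8, 8)),
          "conv2": rng.standard_normal((8, 8))}
bitwidths = {"conv1": 8, "conv2": 4}

for name, w in layers.items():
    q, scale = quantize(w, bitwidths[name])
    err = np.mean(np.abs(w - q * scale))
    print(f"{name}: {bitwidths[name]} bits, mean abs error {err:.4f}")
```

Lower bitwidths shrink storage and arithmetic cost at the price of a larger reconstruction error, which is exactly the accuracy/efficiency trade-off that a run-time-scalable architecture lets one navigate per layer.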
Keywords
Degradation,Quantization (signal),Computer architecture,Parallel processing,Very large scale integration,Robustness,Inference algorithms