Interconnect-Aware Area and Energy Optimization for In-Memory Acceleration of DNNs

IEEE Design & Test (2020)

Cited: 30 | Views: 20
Abstract
State-of-the-art in-memory computing (IMC) architectures employ an array of homogeneous tiles and severely underutilize processing elements (PEs). In this article, the authors propose an area and energy optimization methodology to generate a heterogeneous IMC architecture coupled with an optimized Network-on-Chip (NoC) for deep neural network (DNN) acceleration. -Yiran Chen, Duke University.
Keywords
In-Memory Computing, Deep Neural Networks, Neural Network Accelerator, Network-on-Chip, Interconnect, RRAM