Neuro-Inspired Computing: From Resistive Memory to Optics

European Quantum Electronics Conference (2019)

Abstract
Recent years have seen marked developments in deep neural networks (DNNs), stemming from advances in hardware and increasingly large datasets. DNNs are now routinely used in domains including computer vision and language processing. At their core, DNNs rely heavily on multiply-accumulate (MAC) operations, making them well suited to the highly parallel computational abilities of GPUs. GPUs, however, are von Neumann in architecture and physically separate memory blocks from computational blocks. This exacts an unavoidable time and energy cost associated with data transport, known as the von Neumann bottleneck. While incremental advances in digital hardware accelerators mitigating the von Neumann bottleneck will continue, we explore the potentially game-changing advantages of non-von Neumann architectures that perform MAC operations within the memory itself. This is achieved using a crossbar array of analog memory, as shown in Fig. 1, which serves as the basis of our analog DNN hardware accelerators and is amenable to both DNN training and forward inference [1], [2]. Recent work from our group has shown analog DNN hardware accelerators capable of a 280× speedup in per-area throughput while also providing a 100× increase in energy efficiency over state-of-the-art GPUs [3].
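As an illustrative sketch (not the authors' hardware), the in-memory MAC described above can be modeled numerically: each cross-point of the crossbar stores a weight as a conductance, input activations are applied as column voltages, and by Ohm's and Kirchhoff's laws each row current is the sum of conductance-voltage products, i.e. one full MAC per row in a single analog step. The array shapes and values below are hypothetical.

```python
import numpy as np

# Hypothetical crossbar: G[i, j] is the conductance (stored weight) at the
# cross-point of row i and column j; V[j] is the voltage encoding input j.
rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 3))   # 4 outputs, 3 inputs (illustrative)
V = np.array([0.2, -0.5, 1.0])           # input activations as voltages

# Row currents I[i] = sum_j G[i, j] * V[j]: the crossbar computes this
# matrix-vector MAC in place, where a digital accelerator would issue
# one multiply-add per (i, j) pair.
I = G @ V

# Explicit multiply-accumulate loop for comparison.
I_loop = np.zeros(4)
for i in range(4):
    for j in range(3):
        I_loop[i] += G[i, j] * V[j]

assert np.allclose(I, I_loop)
```

The point of the sketch is the contrast in operation count: the loop performs 12 sequential multiply-adds, while the physical crossbar produces all four row currents simultaneously, which is the source of the throughput and energy advantages the abstract cites.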
Keywords
neuro-inspired, marked developments, deep neural networks, DNN, increasingly large datasets, computer vision, language processing, multiply-accumulate operations, highly parallel computational abilities, physically separate memory blocks, computational blocks, unavoidable time, energy cost, data transport, von Neumann bottleneck, incremental advances, digital hardware accelerators, non-von Neumann architectures, MAC operations, analog memory, analog DNN hardware accelerators, state-of-the-art GPU, game-changing advantages