Deep learning acceleration based on in-memory computing.
IBM Journal of Research and Development (2019)
Abstract
Performing computations on conventional von Neumann computing systems results in a significant amount of data being moved back and forth between the physically separated memory and processing units. This costs time and energy, and constitutes an inherent performance bottleneck. In-memory computing is a novel non-von Neumann approach, where certain computational tasks are performed in the memory it...
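The core idea the abstract describes, computing directly where data is stored rather than shuttling it to a processor, is often realized as an analog matrix-vector multiply on a resistive crossbar: weights are stored as device conductances, input voltages are applied to the rows, and Ohm's and Kirchhoff's laws produce the result as column currents. The sketch below is an illustrative numerical model of that idea (the noise scale and all variable names are assumptions for illustration, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

W = rng.standard_normal((4, 3))   # weight matrix, stored as conductances
x = rng.standard_normal(4)        # input vector, applied as row voltages

exact = W.T @ x                   # ideal digital matrix-vector product

# Analog in-memory compute: programmed conductances deviate slightly from
# the target weights (device variability), modeled here as Gaussian noise.
G = W + rng.normal(scale=0.01, size=W.shape)
analog = G.T @ x                  # column currents read out in one step

print("exact :", exact)
print("analog:", analog)
print("max error:", np.max(np.abs(analog - exact)))
```

Because the multiply-accumulate happens in the memory array itself, no weight data moves across a memory-processor boundary; the trade-off, as modeled by the noise term above, is that the result is approximate.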
Keywords
Computer architecture, Neurons, Training, Performance evaluation, Task analysis, Analog memory, Deep learning