Machine Learning Training on a Real Processing-in-Memory System

2022 IEEE Computer Society Annual Symposium on VLSI (ISVLSI), 2022

Abstract
Machine learning (ML) algorithms [1]–[6] have become ubiquitous in many fields of science and technology due to their ability to learn from and improve with experience with minimal human intervention. These algorithms train by updating their model parameters in an iterative manner to improve the overall prediction accuracy. However, training machine learning algorithms is a computationally intensive process that requires large amounts of training data. Accessing training data in current processor-centric systems (e.g., CPU, GPU) implies costly data movement between memory and processors, which results in high energy consumption and accounts for a large percentage of the total execution cycles. This data movement can become the bottleneck of the training process if there is not enough computation and locality to amortize its cost.
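The iterative parameter updates the abstract refers to typically take the form of a mini-batch gradient-descent loop. The sketch below is illustrative only, not the paper's processing-in-memory implementation; the function name and parameters are chosen for this example. It makes explicit where each training step reads a batch of data, which on processor-centric systems means moving that data from memory to the CPU or GPU before any computation happens.

```python
# Illustrative sketch (not from the paper): mini-batch SGD for linear
# regression, showing the iterative parameter-update loop and where
# training data must be fetched from memory on each step.
import numpy as np

def sgd_linear_regression(X, y, lr=0.01, epochs=10, batch_size=32):
    """Fit w, b for y ~ X @ w + b with mini-batch stochastic gradient descent."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        idx = np.random.permutation(n)
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            Xb, yb = X[batch], y[batch]          # data movement: memory -> processor
            err = Xb @ w + b - yb                # prediction error on the batch
            w -= lr * (Xb.T @ err) / len(batch)  # gradient step on the weights
            b -= lr * err.mean()                 # gradient step on the bias
    return w, b
```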
Keywords
machine learning, processing-in-memory, regression, classification, clustering, benchmarking