A Machine Learning Based Load Value Approximator Guided by the Tightened Value Locality

GLSVLSI '23: Proceedings of the Great Lakes Symposium on VLSI 2023 (2023)

Abstract
This paper addresses two essential memory bottlenecks: 1) the memory wall, and 2) the bandwidth wall. To this end, we propose a machine learning (ML) based model that estimates the values to be loaded from memory by a wide range of error-resilient applications. The proposed model exploits tightened value locality, i.e., the periodic loading of a small set of unique values. The proposed ML-based load value approximator (LVA) requires minimal overhead, as it relies on a hash that encodes the history of events (e.g., the history of accessed addresses) together with values that can be extracted from the load instruction to be approximated. The proposed LVA completely eliminates memory accesses, i.e., 100% of accesses, at runtime and thus addresses both the memory wall and the bandwidth wall. Compared to related work, our LVA delivers a maximum accuracy of 95.16% while offering a higher reduction in memory accesses.
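The mechanism the abstract describes (indexing a prediction table by a hash of the load instruction and its recent address history, so that the few periodically recurring values of a tightened-value-locality workload can be returned without a memory access) can be sketched as follows. This is an illustrative toy, not the paper's actual ML model: the table size, history length, and hash function are all assumptions made for the example.

```python
class LoadValueApproximator:
    """Toy context-based load value approximator (illustrative sketch).

    A fixed-size table is indexed by a hash of the load's PC and the
    recent address history; each entry holds the last value observed for
    that context. Workloads with tightened value locality (a few unique
    values recurring periodically) make the same context reappear, so
    the stored value approximates the next load without a memory access.
    """

    def __init__(self, table_bits=10, history_len=4):
        self.size = 1 << table_bits
        self.table = [0] * self.size          # one predicted value per context
        self.addr_history = [0] * history_len  # recent load addresses

    def _index(self, pc, addr):
        # Fold the PC and recent addresses into one table index
        # (hypothetical hash; any mixing function would do here).
        h = pc
        for a in self.addr_history:
            h = (h * 31 + a) & 0xFFFFFFFF
        return (h ^ addr) % self.size

    def predict(self, pc, addr):
        # Approximate the load value for this (PC, history, address) context.
        return self.table[self._index(pc, addr)]

    def train(self, pc, addr, actual_value):
        # When the real value is available, record it for this context
        # and advance the address history.
        self.table[self._index(pc, addr)] = actual_value
        self.addr_history = self.addr_history[1:] + [addr]
```

For a periodic access pattern, a few warm-up cycles make the contexts repeat, after which the approximator returns the recurring values directly, which is the sense in which such a scheme can skip memory accesses at runtime.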
Keywords
Approximate Computing, Approximate Cache, Approximate Memory, Approximate Load Value, Machine Learning