kNN-CAM: A k-Nearest Neighbors-based Configurable Approximate Floating Point Multiplier

20th International Symposium on Quality Electronic Design (ISQED), 2019

Abstract
In many real-world computations, such as the arithmetic operations in the hidden layers of a neural network, some amount of inaccuracy can be tolerated without degrading the final result (e.g., maintaining the same level of image classification accuracy). This paper presents the design of kNN-CAM, a k-Nearest Neighbors (kNN)-based Configurable Approximate floating point Multiplier. kNN-CAM exploits approximate computing opportunities to deliver significant area and energy savings. A kNN engine is trained on a sufficiently large set of input data to learn how many bits can be truncated from each floating point operand while minimizing energy and area. This trained engine is then used to predict the level of approximation for unseen data. Experimental results show that kNN-CAM provides about 67% area saving and 19% speedup while losing only 4.86% accuracy compared to a fully accurate multiplier. Furthermore, applying kNN-CAM to a handwritten digit recognition implementation yields 47.2% area saving while accuracy drops by only 0.3%.
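To make the approach concrete, the sketch below illustrates the core idea in Python: a plain kNN model predicts, per operand, how many mantissa bits can safely be truncated, and the approximate product is computed from the truncated operands. This is not the authors' implementation; the feature choice, the labelling rule, and the value of k are assumptions made purely for illustration.

```python
import struct
import numpy as np

def truncate_mantissa(x: float, bits_to_drop: int) -> float:
    """Zero out the lowest `bits_to_drop` bits of a float32 mantissa."""
    bits_to_drop = int(bits_to_drop)
    raw = struct.unpack("<I", struct.pack("<f", float(x)))[0]
    mask = (0xFFFFFFFF << bits_to_drop) & 0xFFFFFFFF
    return struct.unpack("<f", struct.pack("<I", raw & mask))[0]

def approx_multiply(a: float, b: float, drop_a: int, drop_b: int) -> float:
    """Approximate product using operands with truncated mantissas."""
    return truncate_mantissa(a, drop_a) * truncate_mantissa(b, drop_b)

def knn_predict(train_x, train_y, query, k=5):
    """Plain kNN: majority vote among the k nearest training samples."""
    dists = np.linalg.norm(train_x - query, axis=1)
    nearest = train_y[np.argsort(dists)[:k]]
    return int(np.bincount(nearest).argmax())

# Toy training set: operand values labelled with a tolerable truncation level.
# Hypothetical labelling rule: larger magnitudes tolerate more truncation.
rng = np.random.default_rng(0)
train_vals = rng.uniform(-10, 10, size=200).astype(np.float32)
train_labels = np.clip((np.abs(train_vals) // 2).astype(int) * 4, 0, 16)
train_feats = train_vals.reshape(-1, 1)

# Inference on an unseen operand pair: predict per-operand truncation, then multiply.
a, b = 3.14159, -7.25
drop_a = knn_predict(train_feats, train_labels, np.array([a]), k=5)
drop_b = knn_predict(train_feats, train_labels, np.array([b]), k=5)
print("exact  :", a * b)
print("approx :", approx_multiply(a, b, drop_a, drop_b))
```

In the hardware design described by the paper, the predicted truncation level would configure how much of the multiplier datapath is exercised; the software sketch above only mimics that configurability by masking mantissa bits before an exact multiplication.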