Huffman Cache Trails

International Symposium on Smart Electronic Systems (2023)

Abstract
This paper presents an alternate cache design that reduces energy consumption by making the switched capacitance of a memory block inversely proportional to the access frequency of the data it holds. Data sets tend to have skewed access frequencies, a trend further amplified by the emergence of Big Data driven computing and deep learning. Such asymmetry presents an optimization opportunity. The memory hierarchy in the processor micro-architecture, however, is designed to switch the same amount of capacitance for any accessed data bit (byte, word, or block); in other words, it has a symmetric, access-oblivious switched-capacitance structure. For the memory organization, one of the more pronounced energy consumers in a processor, such a symmetric design is a poor fit. The Huffman coding algorithm allocates code lengths inversely proportional to event probabilities. Inspired by this idea, this paper develops a novel cache organization: a traditional cache is partitioned into non-uniformly sized banks called Trails. Each data set is allocated to a trail whose size is inversely proportional to its access frequency, so that the smallest trail holds the most frequently accessed data and larger trails hold progressively less frequently accessed data. This reduces the expected switched capacitance in the same way Huffman coding reduces expected code length. The proposed organization was modeled for the Level-1 Data (L1D) cache in the GEM5 simulator and tested with the SPEC 2006 CPU benchmarks. It reduced the energy consumption of the L1D cache by 54% with a 4% performance overhead (latency cycles).
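The Huffman analogy can be made concrete: if data set i is accessed with probability p_i and its trail switches capacitance C_i per access, the expected switched capacitance is sum_i p_i * C_i, the same form as Huffman's expected code length sum_i p_i * l_i. The sketch below is a minimal illustration of this allocation rule, assuming hypothetical trail sizes, a capacitance model proportional to trail size, and made-up access frequencies; it is not the paper's actual GEM5 model.

```python
# Minimal sketch of Huffman-style trail allocation.
# Assumptions (hypothetical, not from the paper): per-access switched
# capacitance scales with trail size, and an access charges only its trail.

# Hypothetical non-uniform trail sizes (KB); total is a 32 KB cache.
trail_sizes_kb = [2, 2, 4, 8, 16]

# Per-access switched capacitance, normalized so the full cache costs 1.0.
cap_per_access = [s / sum(trail_sizes_kb) for s in trail_sizes_kb]

# Hypothetical skewed access frequencies for five data sets (sum to 1).
access_prob = [0.50, 0.25, 0.15, 0.07, 0.03]

# Huffman-inspired rule: most frequently accessed data -> smallest trail.
pairs = zip(sorted(access_prob, reverse=True), sorted(cap_per_access))
expected_cap_trails = sum(p * c for p, c in pairs)

# Baseline: a symmetric cache switches the full capacitance on every access.
expected_cap_uniform = 1.0

print(f"trails:  {expected_cap_trails:.3f} (normalized)")   # ~0.098
print(f"uniform: {expected_cap_uniform:.3f} (normalized)")  # 1.000
# The skew makes E[C] = sum_i p_i * C_i far smaller than the uniform case,
# mirroring how Huffman coding shrinks expected code length sum_i p_i * l_i.
```

Under these made-up numbers the trail organization switches roughly a tenth of the capacitance of the symmetric baseline per access; the actual 54% saving reported above comes from the paper's GEM5 model, which also accounts for trail lookup and arbitration overheads.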
Keywords
Benchmark, Big Data, Code Length, Coding Tree, Performance Overhead, Memory Hierarchy, Energy Conservation, Arbitration, Root Node, Access Time, Design Space, Design Points, Active Switches, Power Management, Access Patterns, Dynamic Energy, Compression Algorithm, Lower Switching, Lossless Compression, Design Space Exploration, Access Block, Reference Count, L2 Cache, Word Line