Energy Consumption Analysis of Instruction Cache Prefetching Methods

2023 International Symposium on Computer Architecture and High Performance Computing Workshops (SBAC-PADW), 2023

Abstract
Frequent instruction cache (L1-I) misses pose a significant performance bottleneck in modern processors, especially for applications with large instruction footprints, such as server applications. To address L1-I misses, various L1-I prefetchers have been proposed over the past two decades. Their designers primarily focused on enhancing performance while minimizing area overhead, paying little attention to the increase in energy consumption that incorporating an L1-I prefetcher causes. Furthermore, prior works assume that an L1-I prefetcher's energy consumption stems mainly from its area overhead. In this work, we demonstrate that a substantial proportion of the energy consumed by an L1-I prefetcher is instead attributable to the additional L1-I accesses the prefetcher initiates. To compensate for the energy cost of these extra accesses, we propose decreasing the energy per L1-I access by reducing the cache's associativity. Our experimental results demonstrate that reducing L1-I associativity from 8 to 2 effectively reduces the energy consumption of L1-I prefetchers. The energy saving achieved through our approach (113.7 nJ/ki on average) compensates for the prefetcher's energy overhead on the baseline system, whose average and highest values are 41.6 nJ/ki and 74.8 nJ/ki, respectively, while the associated performance loss (0.8% on average) remains negligible.
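The abstract's headline claim can be sanity-checked with simple arithmetic: the saving from lowering associativity must exceed the prefetcher's access-energy overhead even in the worst case. The sketch below uses only the nJ/ki figures quoted in the abstract; the plain subtraction model is our own illustrative assumption, not the paper's evaluation methodology.

```python
# All values in nJ per kilo-instruction (nJ/ki), taken from the abstract.
saving_from_lower_assoc = 113.7   # avg saving from reducing L1-I associativity 8 -> 2
prefetcher_overhead_avg = 41.6    # average prefetcher energy overhead on the baseline
prefetcher_overhead_max = 74.8    # highest observed prefetcher energy overhead

# Net benefit = saving minus overhead (simplified model, assumed here).
net_avg = saving_from_lower_assoc - prefetcher_overhead_avg
net_worst = saving_from_lower_assoc - prefetcher_overhead_max

print(f"net saving, average case: {net_avg:.1f} nJ/ki")
print(f"net saving, worst case:  {net_worst:.1f} nJ/ki")
```

Both results are positive, which is consistent with the abstract's claim that the associativity reduction fully compensates for the prefetcher's energy overhead.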
Key words
energy consumption, instruction prefetching, instruction cache, cache associativity