
Adaptive compression for instruction code of Coarse Grained Reconfigurable Architectures

FPT (2013)

Cited by 8 | Views 17
Abstract
Coarse Grained Reconfigurable Architecture (CGRA) achieves high performance by exploiting instruction-level parallelism through software pipelining. A critical problem of CGRA, however, is its large instruction memory, which demands substantial silicon area and power consumption. Code compression is a promising technique to reduce memory area, bandwidth requirements, and power consumption. We present an adaptive code compression scheme for CGRA instructions based on dictionary-based compression, in which the compression mode and dictionary contents are adaptively selected for each execution kernel and compression group. In addition, the hardware decompressor can be implemented efficiently, with two-cycle latency and negligible silicon overhead. In experiments with well-optimized applications, the proposed method achieved an average compression ratio of 0.52 on a CGRA with a 16-functional-unit array.
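The core idea of dictionary-based instruction compression, as summarized in the abstract, can be sketched as follows. This is a minimal illustrative model, not the paper's actual encoding: it assumes a 32-bit instruction word, a 16-entry per-kernel dictionary (so frequent words compress to a 1-bit flag plus a 4-bit index), and an escape encoding for words missing from the dictionary; all widths and names are assumptions for illustration.

```python
# Illustrative sketch of dictionary-based instruction compression with a
# per-kernel (adaptive) dictionary. All parameters below are assumed for
# illustration and are not taken from the paper.
from collections import Counter

INSTR_BITS = 32               # assumed uncompressed instruction width
DICT_SIZE = 16                # assumed dictionary entries per kernel
INDEX_BITS = 4                # log2(DICT_SIZE) bits per compressed reference
ESCAPE_BITS = 1 + INSTR_BITS  # flag bit + raw word for dictionary misses

def compression_ratio(kernel):
    """Compressed size / original size for one kernel's instruction stream."""
    # Adaptive step: the dictionary holds this kernel's most frequent words.
    dictionary = {w for w, _ in Counter(kernel).most_common(DICT_SIZE)}
    original = len(kernel) * INSTR_BITS
    compressed = sum(
        1 + INDEX_BITS if w in dictionary else ESCAPE_BITS
        for w in kernel
    )
    return compressed / original

# A repetitive kernel (typical of software-pipelined loops) compresses well.
kernel = [0xA0, 0xA0, 0xB1, 0xA0, 0xC2, 0xB1] * 50
print(round(compression_ratio(kernel), 3))  # → 0.156
```

Because the dictionary is rebuilt per kernel, a loop whose instruction words repeat heavily compresses far below a ratio of 1.0, while a kernel full of unique words would pay the escape overhead instead.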
Keywords
coarse grained reconfigurable architectures,negligible silicon overhead,adaptive code compression scheme,well-optimized applications,bandwidth requirements,power aware computing,dictionary-based compression,power consumption,16-functional unit array,microprocessor chips,dictionary contents,dictionary-based code compression,cgra,hardware decompressor,reconfigurable architectures,instruction-level parallelism,instruction code,silicon area,large instruction memory,two-cycle latency,memory area,software pipeline,pipeline processing