A Partitioned Memory Architecture with Prefetching for Efficient Video Encoders

PDCAT (2022)

Abstract
A hardware video encoder based on recent video coding standards such as HEVC and VVC must efficiently handle a massive number of memory accesses for motion vector search. To this end, this paper first evaluates the memory access behavior of a hardware video encoding pipeline. The preliminary evaluation shows that the early pipeline stages, which access wide areas of the reference frames for the rough search, behave quite differently from the subsequent stages, which access small areas of the reference frames for the precise search. Based on this observation, this paper proposes a partitioned memory architecture for the hardware video encoding pipeline. The architecture adopts a split cache structure consisting of a front-end cache and a back-end cache. The front-end cache stores shrunk reference frames and supplies them to the early stages for the rough search, while normal reference frames for the precise search are supplied only to the subsequent stages through the back-end cache. This structure reduces the memory bandwidth requirement. On the other hand, the split cache structure cannot reuse the data loaded by the early stages in the subsequent stages, which increases cache misses there and may cause memory accesses to miss the deadlines required for real-time encoding. To solve this problem, this paper also designs and implements a coding tree unit (CTU) prefetcher for the back-end cache. The CTU prefetcher loads the data needed by the subsequent stages without waiting for the results of the early stages. The evaluation results show that the proposed memory system successfully reduces the cache miss rate and the deadline miss rate in the subsequent stages. As a result, the proposed memory architecture helps satisfy the demands of real-time encoding while reducing energy consumption.
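The abstract does not include code, but the split-cache idea can be illustrated with a short simulation. The C++ sketch below is an assumption-laden illustration, not the authors' implementation: it models the front-end and back-end caches as simple sets of block addresses, assumes a 64x64 CTU, a 2:1 downscale factor for the shrunk reference frame, and a hypothetical prefetch_ctu routine that pulls a fixed 3x3 CTU window into the back-end cache before the precise search touches it.

```cpp
#include <cstdint>
#include <iostream>
#include <unordered_set>

constexpr int kCtuSize = 64;  // CTU edge length in pixels (HEVC default)
constexpr int kShrink  = 2;   // assumed downscale factor of the shrunk frame

// A toy fully-associative cache modeled as a set of cached block addresses.
struct Cache {
    std::unordered_set<uint64_t> lines;
    int hits = 0, misses = 0;

    bool access(uint64_t block) {
        if (lines.count(block)) { ++hits; return true; }
        ++misses;
        lines.insert(block);  // fill on miss; no eviction in this sketch
        return false;
    }
};

// Map a pixel coordinate to a CTU-granularity block address.
uint64_t ctu_block(int x, int y, int frame_width) {
    uint64_t ctus_per_row = frame_width / kCtuSize;
    return static_cast<uint64_t>(y / kCtuSize) * ctus_per_row + (x / kCtuSize);
}

int main() {
    const int width = 1920, height = 1080;
    Cache front_end;  // holds the shrunk reference frame (rough search)
    Cache back_end;   // holds the full-resolution reference frame (precise search)

    // Hypothetical CTU prefetcher: pull the CTU at (cx, cy) and its neighbours
    // into the back-end cache without waiting for the rough-search result.
    auto prefetch_ctu = [&](int cx, int cy) {
        for (int dy = -1; dy <= 1; ++dy)
            for (int dx = -1; dx <= 1; ++dx) {
                int x = (cx + dx) * kCtuSize, y = (cy + dy) * kCtuSize;
                if (x >= 0 && y >= 0 && x < width && y < height)
                    back_end.lines.insert(ctu_block(x, y, width));
            }
    };

    // Walk CTUs in raster order, as an encoding pipeline would.
    for (int cy = 0; cy < height / kCtuSize; ++cy)
        for (int cx = 0; cx < width / kCtuSize; ++cx) {
            // Early stage: rough search over the shrunk frame (front-end cache).
            front_end.access(ctu_block(cx * kCtuSize / kShrink,
                                       cy * kCtuSize / kShrink,
                                       width / kShrink));
            // Issue the prefetch before the precise search needs the data.
            prefetch_ctu(cx, cy);
            // Subsequent stage: precise search on full-resolution data (back-end cache).
            back_end.access(ctu_block(cx * kCtuSize, cy * kCtuSize, width));
        }

    std::cout << "front-end misses: " << front_end.misses
              << ", back-end misses: " << back_end.misses << '\n';
}
```

In this toy raster-order walk the prefetch always arrives before the precise search, so the back-end cache reports no misses; a real design must also handle finite capacity, eviction, and access deadlines, which this sketch deliberately ignores.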
Keywords
efficient video encoders, partitioned memory architecture, prefetching